[ICML 2016] [META] What makes a good paper and submission in Deep Learning?

 
all 18 comments

 

[–]BeatLeJuce 5 points 5 days ago* 

General tips have been posted here. For your specific questions:

Is setting SOTA a requirement?

No. It helps, but novelty is much more important (i.e., if what you've done is not new/has been done before, forget about ICML).

Can we put in partially baked ideas with some results?

Definitely not, you'll be rejected with a "Nice try, come back once the idea is fleshed out and you've actually done the work". If you have any half-finished things, the best you can hope for is getting accepted at one of the workshops.

 

[–]ICML2016[S] 2 points 5 days ago 

Okay, that makes sense. I have two follow-up questions, if you don't mind:

  1. What makes a completely fleshed out solution?

  2. What exactly is a "Workshop" paper? Will it be considered as an ICML submission? What is the deadline for that?

 

[–]rhiever 5 points 5 days ago* 

A full paper generally requires a story -- you explain the background, present the idea, test the idea with experiments, then draw some broader conclusions from the experiments. If you can't write a complete narrative around your idea yet, then you're probably better off waiting and doing more work. There's not much worse than submitting a bad, half-assed paper.

Workshops usually have a deadline a month or two after the main conference deadline. They're typically organized around specific topics and hold their own sessions with talks. I believe that papers in workshops are still published in the conference proceedings (EDIT: this is not the case in ICML), but they're not as prestigious as getting published in the main conference proceedings. Basically, it's a second chance to present something at the conference.

 

[–]BeatLeJuce 3 points 4 days ago 

I believe that papers in workshops are still published in the conference proceedings

Not the case for ICML, NIPS, etc. Workshop papers are usually not regarded as true publications (although every now and then there are workshop papers that are really good, and those usually get the respect they deserve).

 

[–]rhiever 2 points 4 days ago 

Thanks for clarifying -- I haven't published at ICML before.

 

[–]rhiever 3 points 5 days ago* 

On this topic: How well-received are evolution-based optimization papers at ICML? I was thinking about submitting something on my ML pipeline optimizer, but didn't have time because of other deadlines.

 

[–]XalosXandrez 1 point 5 days ago 

If you can compare against Bayesian Optimization / Random Search and show that you are better, you are in business.

 

[–]leonoel 2 points 4 days ago 

It has to compare favorably both in the quality of the minimum found (which usually shows up in the accuracy) and in execution time, though. Evolution-based optimizers are usually far more complex than Bayesian Optimization or SGD.

They are very good for non-differentiable functions, but beyond that it's hard to make a case for them.
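To make the non-differentiability point concrete, here's a toy sketch comparing plain random search with a (1+1) evolution strategy on a function that has a kink and a step, so gradient-based methods get no useful signal at those points. Everything in it (the objective, the evaluation budget, the fixed mutation step size) is made up for illustration, not something from this thread.

```python
import random

def objective(x):
    # Toy non-differentiable target: a kink at x = 3 plus a step at x = 0.
    # There is no useful gradient at the kink or across the step.
    return abs(x - 3) + (1.0 if x < 0 else 0.0)

def random_search(n_iters=2000, seed=0):
    """Baseline: sample uniformly in [-10, 10], keep the best point seen."""
    rng = random.Random(seed)
    best_x, best_f = 0.0, objective(0.0)
    for _ in range(n_iters):
        x = rng.uniform(-10.0, 10.0)
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

def one_plus_one_es(n_iters=2000, sigma=1.0, seed=0):
    """(1+1) evolution strategy: mutate the parent with Gaussian noise,
    keep the child only if it is no worse than the parent."""
    rng = random.Random(seed)
    x = rng.uniform(-10.0, 10.0)
    f = objective(x)
    for _ in range(n_iters):
        child = x + rng.gauss(0.0, sigma)
        f_child = objective(child)
        if f_child <= f:
            x, f = child, f_child
    return x, f
```

Both methods find the minimum near x = 3 on this toy problem; the practical question raised above is whether the evolutionary method still wins once you hold the evaluation budget and wall-clock time fixed against a tuned baseline.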

 

[–]pranv 1 point 4 days ago 

Also useful when local minima are few and of poor quality.

 

[–]Delster111 3 points 4 days ago 

Make sure the title has the words "Deep Learning" in it.

 

[–]555x 2 points 4 days ago 

Here's an interesting scenario:

You take an old algorithm and improve it (or claim you do), but you also throw more layers and more computer time at it than anyone else, and get SOTA results.

Should this be publishable?

 

[–]ICML2016[S] 1 point 4 days ago 

Yeah... kinda sucks. Good point.

 

[–]alexmlamb 3 points 4 days ago 

"Is setting SOTA a requirement? Can we put in partially baked ideas with some results?"

I think that the real goal is to stack more layers and pontificate about how the brain works. State of the art results are an unfortunate side-effect.

 

[–]drpout 1 point 3 days ago 

dude you rock. don't let anyone else tell you otherwise

 

[–]alexmlamb 1 point 3 days ago 

no one has told me otherwise

༼つ ◕_◕ ༽つ

 

[–]awhitesong 1 point 4 days ago 

One more thing: can we make a slight change to a previously published paper (e.g., change an RNN from LSTM to some other relatively new gating mechanism that hasn't been explored yet) and show results with everything else kept almost the same? Would this be publishable?

 

[–]kjearns 1 point 4 days ago 

It depends how you write it and how good the results are. You will at least need to show improvement over what you are replacing; this is the kind of paper where a SOTA result or two will help a lot.

It will also help if you have a compelling reason why you settled on the architecture you did. Can you tell a story about why we should expect your version to be better?

 

[–]GoldmanBallSachs_ 1 point 4 days ago 

No

posted @ 2016-02-10 21:49 by 菜鸡一枚 · views: 353 · comments: 0