I have been using R for the past year and am working on two projects: Facial Keypoints Detection and Classification of Music according to mood. I have applied the basic algorithms, but now I need to use neural networks. However, I haven't been able to find any good resources on DL in R. There are a few articles here and there, but nothing thorough and worthwhile. On the other hand, Python has very rich resources on deep learning all over the internet. Now what should I do? Should I analyse the data in R first and then use Python's deep learning tools, or stick with R? If the latter, please mention some resources; if the former, please guide me on how to start. Thank you.
There are some deep learning packages in R as well to my knowledge.
Thanks Shaswat. I'll make sure to check out these links. So according to you, R should be enough for my requirements? Or should I switch to Python? I only want neural networks for my project work.
I would be able to give a more precise answer if you could tell me the exact project you are working on, and whether it is image processing or some other domain. That said, Python does have the most powerful tools to help you with deep learning.
@jalFaizy, I think you would be able to guide us here.
I am working on Kaggle’s Facial Keypoint Recognition.
What I would recommend is to ask yourself the following questions:
- Is deep learning worth the efforts (is deep learning essential for the completion of the project)?
- What timeline am I looking at for project completion?
- Do I have the resources to run a full fledged deep learning model?
I would elaborate on the above points:
Deep learning is just another algorithm in the machine learning toolkit. It has its strengths and weaknesses, so it's absolutely essential that you first understand your problem statement, talk to a knowledgeable person (your project guide, for example) and then decide what you want to do.
I would recommend that if you have at least two months or more left before completion (and the above condition is met), go ahead with deep learning. DL requires extensive study and survey to understand all of its components properly, so plan accordingly.
Last but not least, a good DL model requires above-average hardware: minimum 4 GB RAM, a good processor (Intel i5 or better) and a GPU (preferably NVIDIA, with 2+ GB of memory). You could rent Amazon AWS instances for a price, but that cost adds up. Make sure you are ready for this.
As far as DL tools are concerned, R too supports DL. But as shaswat said, Python has more powerful tools. So before jumping onto Python DL tools, make sure you have ample time and resources to go through a completely new avenue (learning Python and its tools).
PS: From my personal experience: I explicitly wanted to implement DL algorithms, so I did a six-month survey of DL and then went on to implement them. I had access to an NVIDIA Titan X, 8 GB RAM and an Intel i7 processor.
PPS: The fastest way to accomplish your task is to skip Python, use R with H2O and work on your local machine (if it is good enough).
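To give a feel for how little code that route takes, here is a minimal sketch of a feed-forward net in R with H2O. The file path, column name and layer sizes are placeholders, not from this thread; you would substitute your own training data:

```r
# Minimal H2O feed-forward net sketch (hypothetical file/column names)
library(h2o)

h2o.init(nthreads = -1)               # start a local H2O cluster on all cores

train <- h2o.importFile("training.csv")  # placeholder path to your data

model <- h2o.deeplearning(
  x = setdiff(names(train), "left_eye_center_x"),  # predictor columns
  y = "left_eye_center_x",                         # one keypoint coordinate
  training_frame = train,
  hidden = c(100, 100),               # two hidden layers of 100 units each
  epochs = 10
)

preds <- h2o.predict(model, train)    # predictions as an H2O frame
```

For the facial keypoints task you would fit one such regression per keypoint coordinate, or use a multi-output approach.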
Thank you for such wonderful suggestions. I have access to a CUDA GPU at my college lab. I only need DL to improve the results and for later projects. But I think I should go with H2O (it seems like a good package) as I'm familiar with R. The only reason I was considering Python is that I thought R didn't have any good packages, but H2O should serve my purpose. Thank you jalFaizy and shaswat.2014. Any other suggestions?
I would also start with H2O because it incorporates a very nice feature to perform "grid search" (and "random search"), though its current implementation only offers feed-forward deep learning. H2O does a great job with the many documents available that can guide you in defining your first net.
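To illustrate the grid-search feature, a sketch with `h2o.grid` might look like the following. The file path, response column name and hyper-parameter values are hypothetical examples, not recommendations:

```r
# H2O grid search over deep learning hyper-parameters (sketch)
library(h2o)

h2o.init()

train <- h2o.importFile("training.csv")   # placeholder path

grid <- h2o.grid(
  algorithm = "deeplearning",
  x = setdiff(names(train), "target"),    # "target" is a hypothetical response
  y = "target",
  training_frame = train,
  hyper_params = list(                    # combinations to search over
    hidden = list(c(50, 50), c(100, 100)),
    input_dropout_ratio = c(0, 0.1)
  )
)

# Inspect the trained models, sorted by a metric of your choice
h2o.getGrid(grid@grid_id, sort_by = "rmse")
```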
But for the kind of project you have, I would recommend moving to "mxnet", because it has convolutional, feed-forward and recurrent networks. "mxnet" runs on both CPU and GPU by changing just one parameter. It is also available for R, so you can integrate it into your development flow and take advantage of the same things H2O provides.
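As a sketch of what a small convolutional net could look like in mxnet's R API (the layer sizes and the `train_x`/`train_y` objects are assumptions for illustration, not from this thread):

```r
# Tiny convolutional regression net in mxnet (sketch, placeholder data)
library(mxnet)

data  <- mx.symbol.Variable("data")
conv1 <- mx.symbol.Convolution(data, kernel = c(3, 3), num_filter = 32)
act1  <- mx.symbol.Activation(conv1, act_type = "relu")
pool1 <- mx.symbol.Pooling(act1, pool_type = "max",
                           kernel = c(2, 2), stride = c(2, 2))
flat  <- mx.symbol.Flatten(pool1)
fc1   <- mx.symbol.FullyConnected(flat, num_hidden = 30)  # e.g. 15 keypoints x 2
net   <- mx.symbol.LinearRegressionOutput(fc1)

# train_x / train_y are placeholders for your image array and targets;
# swapping ctx = mx.cpu() for ctx = mx.gpu() is the one-parameter CPU/GPU switch
model <- mx.model.FeedForward.create(
  net, X = train_x, y = train_y,
  ctx = mx.cpu(),
  num.round = 10,
  learning.rate = 0.01
)
```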