I would appreciate it if anyone could advise on the best tool for our situation:
- we work on advanced-analytics projects (roughly 6 months each) with data scientists (such as myself) and other consultants
- over those 6 months we have the clients extract a large amount of data (1 TB to 2 TB) from their databases
- the data needs to be available both to the machine learning models and to the consultants for exploratory analyses
- individual tables can be up to 50 GB
- we don't have data engineers on site
We have tried:
- SQLite: the problem here is that we have to make several copies of the same dataset to import into each local instance of SQLite
- MySQL: the data is available over the network, but we run into problems when tables are larger than 10 GB (GROUP BY queries can take 12 hours to run)
- exchanging CSV files via SSH: not ideal, but it works. The consultants can use ETL tools or local databases to manipulate the data, but we would like a single database, available over the network, where they could run queries as well.
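To make the current workflow concrete, here is a minimal sketch of how a consultant loads one of the exchanged CSV files into a local SQLite database in batches (the file, table, and column names are placeholders, not our real schema):

```python
import csv
import sqlite3

def load_csv_to_sqlite(csv_path, db_path, table, batch_size=10_000):
    """Stream a CSV extract into a local SQLite table in batches,
    so memory use stays bounded even for multi-GB files."""
    conn = sqlite3.connect(db_path)
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        cols = ", ".join(f'"{c}"' for c in header)
        placeholders = ", ".join("?" for _ in header)
        # All columns as TEXT; typing/cleaning happens later in exploration.
        conn.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
        insert_sql = f'INSERT INTO "{table}" VALUES ({placeholders})'
        batch = []
        for row in reader:
            batch.append(row)
            if len(batch) >= batch_size:
                conn.executemany(insert_sql, batch)
                batch.clear()
        if batch:
            conn.executemany(insert_sql, batch)
    conn.commit()
    return conn
```

This works, but every consultant ends up running a variant of it against their own copy of the file, which is the duplication we would like to avoid.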
Some of the questions we have are:
- Would it make sense to run Hadoop on a single machine (with multiple data nodes) in this situation? Would that improve performance?
- Can the free version of MySQL be set up to work efficiently with large tables? We have tried adding indexes, but either the import takes a very long time (if the indexes are created before importing) or index creation takes a very long time (if done after importing the data). Our concern is that we are wasting time trying to make it work when it will never handle a dataset of this size efficiently.
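For context, this is a sketch of the "create indexes after importing" variant we tried (table, file, and column names are placeholders):

```sql
-- Create the table with only the primary key, no secondary indexes,
-- so the bulk load does not have to maintain them row by row.
CREATE TABLE transactions (
    id         BIGINT NOT NULL,
    client_id  BIGINT NOT NULL,
    amount     DECIMAL(12,2),
    created_at DATETIME,
    PRIMARY KEY (id)
) ENGINE=InnoDB;

-- Bulk-load the client extract.
LOAD DATA INFILE '/data/transactions.csv'
INTO TABLE transactions
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
IGNORE 1 LINES;

-- Then add the indexes the GROUP BY / join queries need.
ALTER TABLE transactions
    ADD INDEX idx_client (client_id),
    ADD INDEX idx_created (created_at);
```

We also understand that server settings such as `innodb_buffer_pool_size` matter a lot for GROUP BY performance on tables this size, but we are unsure what values are reasonable for commodity hardware, or whether tuning can realistically close the gap.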
Thanks for your help!