Morgan Stanley Takes On Big Data With Hadoop
When Morgan Stanley tried to run portfolio analysis 18 months ago, it found that traditional databases and grid computing simply wouldn’t scale to the very large volumes of data its data scientists wanted to use.
Gary Bhattacharjee, executive director of enterprise information management at the firm, had worked with Hadoop as early as 2008 and thought that it might provide a solution. So the IT department hooked up some old servers.
At the Fountainhead conference on Hadoop in Finance in New York, Bhattacharjee said the investment bank had started by stringing together 15 end-of-life boxes.
“It allowed us to bring really cheap infrastructure into a framework and install Hadoop and let it run.”
By using Hadoop, the bank can now work with large volumes of data from all angles instead of relying on smaller sample sets, he explained this week.
“We decided to try Hadoop and MapReduce and that opened it up. We now have a very scalable solution for portfolio analysis.”
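Bhattacharjee didn’t describe the bank’s jobs in any detail, but the pattern he’s referring to is the standard Hadoop MapReduce one: mappers scan raw position records in parallel across the cluster, and reducers aggregate per key. The sketch below is purely illustrative, assuming a hypothetical CSV input of the form `portfolioId,ticker,positionValue` and a made-up `PortfolioTotals` job that sums position values per portfolio; it is not Morgan Stanley’s actual code.

```java
// Hypothetical sketch: sum position values per portfolio with Hadoop MapReduce.
// Input lines assumed to look like: portfolioId,ticker,positionValue
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PortfolioTotals {

    // Map: emit (portfolioId, positionValue) for every well-formed record.
    public static class PositionMapper
            extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");
            if (fields.length == 3) {
                context.write(new Text(fields[0]),
                              new DoubleWritable(Double.parseDouble(fields[2])));
            }
        }
    }

    // Reduce: total all position values seen for one portfolio.
    public static class SumReducer
            extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        @Override
        protected void reduce(Text key, Iterable<DoubleWritable> values,
                              Context context)
                throws IOException, InterruptedException {
            double total = 0.0;
            for (DoubleWritable v : values) {
                total += v.get();
            }
            context.write(key, new DoubleWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "portfolio totals");
        job.setJarByClass(PortfolioTotals.class);
        job.setMapperClass(PositionMapper.class);
        // Summation is associative, so the reducer doubles as a combiner,
        // cutting the data shuffled between the cheap commodity nodes.
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The appeal for a use case like this is that scaling is horizontal: the same job runs unchanged whether the data sits on 15 repurposed servers or hundreds of nodes, which is what lets the bank analyze full datasets rather than samples.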