Video Demo – Introduction to MongoSluice
Finally, the tool you’re looking for to get not only faster results, but better results! MongoDB to RDBMS – Postgres, SQL Server, MySQL, Oracle, HP Vertica, SQLite…
Check out our brand new video: Introduction to MongoSluice
Chris: Alright, welcome everybody. This is a little tutorial, a little demo of MongoSluice, which is going to move data from MongoDB to any target RDBMS. We’re using MySQL, but you can go to SQLite, PostgreSQL, Oracle, SQL Server, Vertica, and some others; you can check our site for the details on all of those. We’re going to do it in a way that requires no coding, it’s fully automated, and we don’t need to know anything about your data in Mongo. We’re going to use a test data set that we found online called Movie Details; it’s got movies, details, actors, locations, titles – this is a look at it. I forgot to say, this is Chris and Jacob, so Jacob is running a terminal over here. Let’s take a look at the top.
At the very top, we can see we’ve got an ID there, and then the fields – all sorts of fields. Pretty structured, but what you’re going to see MongoSluice do is take this and create a bunch of tables. That’s really all we need to show on the MongoDB side.
Let’s go over to MongoSluice. If you haven’t seen our site – if you’re here you’ve probably looked around, but I’ll just reiterate it – MongoSluice is a jar file that you can throw on any Linux box; all it requires is a JVM. There are a couple of configuration settings: here we see the connection names, the source database, the target database, and a couple of advanced features that we’ll show you in a later demo. Job ID, push to SQL – we’re ready to execute this. What’s going to happen is, MongoSluice is going to go into MongoDB and interrogate every single document in the collection. It’s going to create a schema, a graph of the data that we could later even save to disk, but it’s going to prepare that in memory, and then it’s going to batch process all the data inside of Mongo and shoot it into our target database. It never lands on disk, okay?
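The settings Chris mentions – connection names, source and target databases, job ID, push to SQL – might live in a file along these lines. This is only an illustrative sketch; the property names here are hypothetical, not MongoSluice’s actual configuration keys (see the MongoSluice docs for the real ones):

```properties
# Hypothetical MongoSluice job configuration -- key names are illustrative only
source.connection = mongodb://mongo-host:27017
source.database   = movieDetails
target.connection = jdbc:mysql://mysql-host:3306/movies
target.database   = movies
job.id            = demo-001
push.to.sql       = true
```

Since the tool ships as a jar, running it would be an ordinary JVM launch (again, exact arguments per the MongoSluice documentation): `java -jar MongoSluice.jar`.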
You can do small data sets, giant data sets – it works. So let’s… Jacob, all we have to do is hit Enter here.
Jacob: It’s running.
Chris: Okay. So in this case we can bounce over to MySQL and wait for it to run. I can talk in a little more detail while we wait for it to show up. What’s unique is that it’s really a two-step process: the interrogation and creation of the schema, and then the streaming of the data out. If you happen to know that your schema doesn’t change a lot and you want to save the schema to disk, you can use it again, and it’s a lot less time and computation up front – because figuring out what the schema is is the hardest, most intensive thing MongoSluice does.
Chris: How’s it looking, Jacob? Are we seeing the tables in?
Jacob: Now if we run SHOW TABLES, it’s going to show that it generated all those tables.
Chris: One, two, three, four, five, six, seven, eight, nine tables. Let’s take a look inside of actors. Scroll up to the top and we’ll see there’s a simple table. It’s got the actor; that middle one is probably the IMDB ID. In the first column is a MongoSluice-created field called PK. It’s a primary key that we assign, and in the next demo we’ll show you how we can enforce, or ascertain, the relationships between the tables by including a foreign key that MongoSluice adds to define the links between tables.
Movie Details ID is the second column, and the actors are in the third. There was zero understanding on our part of what was in that collection. We simply pointed MongoSluice at MongoDB and said, “Take all that data and give it to me over here in MySQL as nice, tidy tables.” We hit Enter, the computation was done on the Mongo server – it created a couple of temporary collections – but it got the schema. It interrogated every single document and every nested array; there’s zero sampling, so this is a complete picture of the data.
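To make the idea concrete, here is a minimal sketch of the general technique – decomposing a nested document into flat relational tables, where scalar fields stay on the parent row and each array becomes a child table with a foreign key back to its parent. This is not MongoSluice’s actual algorithm, and the field names are hypothetical; it only illustrates why a single nested collection can yield several linked tables:

```python
def decompose(doc, table, tables, parent=None):
    """Split one document into rows spread across `tables` (a dict of row lists).

    Scalars land on this table's row; arrays and nested documents recurse
    into child tables named `<table>_<field>`, carrying a foreign key back.
    """
    row = {"pk": len(tables.setdefault(table, [])) + 1}
    if parent:
        # foreign key pointing at the parent table's row
        row[parent[0] + "_pk"] = parent[1]
    for key, value in doc.items():
        if isinstance(value, list):
            # each array element becomes a row in a child table
            for item in value:
                child = item if isinstance(item, dict) else {key: item}
                decompose(child, f"{table}_{key}", tables, (table, row["pk"]))
        elif isinstance(value, dict):
            # nested documents also become child tables
            decompose(value, f"{table}_{key}", tables, (table, row["pk"]))
        else:
            row[key] = value
    tables[table].append(row)
    return tables

# Hypothetical document shaped loosely like the Movie Details data set
movie = {"title": "Casablanca", "year": 1942,
         "actors": ["Humphrey Bogart", "Ingrid Bergman"]}
tables = decompose(movie, "movie_details", {})
```

Running this produces a `movie_details` table plus a `movie_details_actors` child table whose rows each carry a `movie_details_pk` column – the same parent/child pattern the demo shows in MySQL.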
Once that schema is created, MongoSluice triggers the streaming of the data over into MySQL, and that’s it. Another advanced feature lets us update that database based on changes, but that’ll be in a future demo. So that’s MongoSluice – hopefully that was about two minutes. Thanks for watching.