It's always very nice to have your integration tests run out-of-the-box as soon as you check the code out of Git. No fiddling with your environment, just click and go.
Unfortunately, Hadoop uses some native Windows libraries and for those unfortunate enough to develop on Windows, this is a pain - particularly if you have a large, distributed team. So, this is a nice hack I've used a couple of times to get the integration tests to run on developers' laptops without any manual configuration.
First, check the binaries hadoop.dll and winutils.exe into your repository. They're not very big (about 40KB). Then you need to set the following system properties:
System.setProperty("java.library.path", BINARY_DIRECTORY)
System.setProperty("hadoop.home.dir", BINARY_DIRECTORY)
where BINARY_DIRECTORY is the absolute path name of the directory the binaries live in.
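If the binaries are checked in next to the tests, BINARY_DIRECTORY can be derived from the project layout rather than hard-coded per machine. A minimal sketch, assuming the files live under src/test/resources/hadoop (the directory name is my own choice, not from the original setup):

// Assumed layout: hadoop.dll and winutils.exe checked in under src/test/resources/hadoop
val BINARY_DIRECTORY = new java.io.File("src/test/resources/hadoop").getAbsolutePath

System.setProperty("java.library.path", BINARY_DIRECTORY)
System.setProperty("hadoop.home.dir", BINARY_DIRECTORY)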
Then, before running any Hadoop code, you need this dirty hack that reflectively assures Hadoop that everything is OK:
// Append BINARY_DIRECTORY to the class loader's native library search path,
// since java.library.path is only read once at JVM startup
val classLoader = this.getClass.getClassLoader
val field = classOf[ClassLoader].getDeclaredField("usr_paths")
field.setAccessible(true)
val usrPath = field.get(classLoader).asInstanceOf[Array[String]]
val newUsrPath = new Array[String](usrPath.length + 1)
System.arraycopy(usrPath, 0, newUsrPath, 0, usrPath.length)
newUsrPath(usrPath.length) = BINARY_DIRECTORY
field.set(classLoader, newUsrPath)

// Tell Hadoop its file systems have already been loaded...
val fileSystemsLoadedField = classOf[org.apache.hadoop.fs.FileSystem].getDeclaredField("FILE_SYSTEMS_LOADED")
fileSystemsLoadedField.setAccessible(true)
fileSystemsLoadedField.setBoolean(null, true)

// ...and that it shouldn't go looking for its native code
val nativeCodeLoadedField = classOf[org.apache.hadoop.util.NativeCodeLoader].getDeclaredField("nativeCodeLoaded")
nativeCodeLoadedField.setAccessible(true)
nativeCodeLoadedField.set(null, false)
(This is Scala but it translates to Java easily enough)
It's ugly and it's dirty, but it does the job: it basically tricks Hadoop into thinking everything is OK. It's good enough for tests, but don't do it in production. It's also not needed at all on *nix systems.
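Since the hack is only needed on Windows, and only once per JVM, one option is to wrap it in a small helper that is a no-op everywhere else. A minimal sketch (the object and method names are my own; the body is the reflection code above):

object WindowsHadoopWorkaround {
  private val isWindows = System.getProperty("os.name").toLowerCase.contains("windows")
  private var applied = false

  // Call this once from test setup, before any Hadoop code runs
  def applyIfNeeded(binaryDirectory: String): Unit = synchronized {
    if (isWindows && !applied) {
      System.setProperty("java.library.path", binaryDirectory)
      System.setProperty("hadoop.home.dir", binaryDirectory)
      // ...the usr_paths / FILE_SYSTEMS_LOADED / nativeCodeLoaded reflection from above goes here...
      applied = true
    }
  }
}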
Having done this, you can start running the integration tests against an HDFS instance backed by your own hard disk. Again, you'd never do this in production (what's the point?), but it really helps with tests.
You do this with the following code (assuming you now want HBase to live on top of Hadoop):
val configuration = org.apache.hadoop.hbase.HBaseConfiguration.create()
// Make sure the right FileSystem implementations are used for file:// and hdfs:// URIs
configuration.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)
configuration.set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)
// Spin up an in-process HBase/HDFS mini cluster backed by the local disk
val utility = new org.apache.hadoop.hbase.HBaseTestingUtility(configuration)
utility.startMiniCluster()
And you're good to go: HBase and HDFS both running on a single machine.
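As a quick sanity check, you can create a table and round-trip a row through it before shutting the cluster down. A minimal sketch, assuming an HBase 1.x-style client API (the table, family, row and column names are just placeholders):

import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.{Get, Put}
import org.apache.hadoop.hbase.util.Bytes

// Create a table with one column family in the mini cluster
val table = utility.createTable(TableName.valueOf("test_table"), Bytes.toBytes("cf"))

// Write a single cell...
val put = new Put(Bytes.toBytes("row1"))
put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"))
table.put(put)

// ...and read it back
val result = table.get(new Get(Bytes.toBytes("row1")))
assert(Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col"))) == "value")

// Tear the mini cluster down once the tests are finished
utility.shutdownMiniCluster()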