Yo dawg, I heard you like browsing the web, so I put a browser in your browser so you can browse while you browse!1
Every now and then I run into a project which makes me remember exactly why I got into software development—a project with a magnetic pull from which I cannot escape. Sebastian Macke’s JavaScript OpenRISC emulator, jor1k, is just such a project.
About 99.998% of the credit for that demo goes to Sebastian. He’s the very talented person who built the initial emulator, who wrestled with a beta version of the OpenRISC toolchain in order to build the image running this demo, and who spent a ridiculous amount of time optimizing the thing. Sure, you’ve seen other JS emulators, but how many are running at ~60 MIPS? How many give you a choice of three cores to run on, including an asm.js core? How many of them have framebuffer support? And, just to toot my own horn, how many of them have ethernet support?
So how does that last part work, anyway?
I wrote an emulated OpenCores ethmac ethernet peripheral which sends/receives ethernet frames via a websocket. On the server side of the websocket, I wrote two stupid-simple servers which can pump the ethernet frames between the websocket and either a TAP device or a VDE2 virtual switch.
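The relay servers really are stupid-simple: on Linux, a TAP device hands you one whole Ethernet frame per read/write, so the job reduces to copying frames between two file descriptors. Here’s a minimal sketch of the TAP half in Python (my actual servers aren’t shown here; the websocket layer is elided, with a plain connected socket standing in for the peer, and `open_tap`/`pump` are illustrative names, not code from the project):

```python
import fcntl
import os
import select
import struct

# Linux TUN/TAP ioctl constants, from <linux/if_tun.h>.
TUNSETIFF = 0x400454CA
IFF_TAP = 0x0002    # Ethernet-level (L2) device
IFF_NO_PI = 0x1000  # raw frames, no extra packet-info header

def open_tap(name="tap0"):
    """Attach to a TAP device; each read/write is one Ethernet frame."""
    fd = os.open("/dev/net/tun", os.O_RDWR)
    ifreq = struct.pack("16sH", name.encode(), IFF_TAP | IFF_NO_PI)
    fcntl.ioctl(fd, TUNSETIFF, ifreq)
    return fd

def pump(tap_fd, sock):
    """Shuttle Ethernet frames between the TAP device and a peer socket.

    In the real relay the peer is a websocket and each frame rides in
    one binary message; here the peer is just a connected socket.
    """
    while True:
        readable, _, _ = select.select([tap_fd, sock], [], [])
        if tap_fd in readable:
            frame = os.read(tap_fd, 2048)  # one whole frame per read
            sock.sendall(frame)
        if sock in readable:
            frame = sock.recv(2048)
            if not frame:
                break  # peer hung up
            os.write(tap_fd, frame)
```

Opening `/dev/net/tun` needs root (or `CAP_NET_ADMIN`), and the VDE2 variant is the same loop with the switch’s socket in place of the TAP fd.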
In theory, the whole thing scales out quite wonderfully. For this demo I’m able to proxy the websocket requests out to multiple machines using nginx, and VDE2 comes with facilities to join switches across multiple machines. For now it’s all running on one box, but if things get hairy I’ll spin up a couple more to handle the traffic.
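The nginx side of that is just the standard websocket-upgrade reverse-proxy setup. Something along these lines (the upstream pool, port, and `/relay` path are made up for illustration, not my actual config):

```nginx
# Hypothetical pool of relay machines running the websocket servers.
upstream relays {
    ip_hash;  # pin each client to one relay
    server relay1.example:8080;
    server relay2.example:8080;
}

server {
    listen 80;

    location /relay {
        proxy_pass http://relays;
        proxy_http_version 1.1;                  # required for websockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 1h;                   # don't reap idle VMs
    }
}
```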
How fast is it? Well, for the demo I’ve capped your bandwidth at around 40 KiB/s and limited you to two websockets per public IP. Sorry about that. However, with the bandwidth limit removed, iperf reports between 5.5 and 7 Mbit/s back to the relay server. That could very well be limited by my own network connection. I’ve also done zero actual optimization of the ethmac code, so I’m sure there are performance gains to be had. Fair warning: if you test performance against your own relay, the “simulation time” tends to run a bit slow, so jor1k’s iperf will overestimate its bandwidth measurement. Trust the measurement on the server side instead.
A lot of people will scoff at this and say it’s not useful. Atwood’s law and all that. Like I said in my opening paragraph, I didn’t do this because it was useful. I did it because it was really fucking fun.2
What makes this fun to me? First, there’s a happy convergence here of a gazillion different standards/specifications (or1k, JavaScript, HTML5, WebSockets, Linux, IEEE 802.3) and they’re all being composed in a way that would make any of the original authors say “who would’ve thought this would’ve been used to do that?” Second, I’m a huge fan of Linux, embedded systems, communication/networking technology, and OpenCores. This project had all of those things. Finally, I’m hoping that someone will find a practical use for this aside from something to scoff at. There’s real power behind the idea that you can run a full virtual machine inside of your browser in a nice sandboxed environment.
So, is this somehow useful to you? Leave a comment below!
1 Please, please be nice with your traffic while running that demo. All of your traffic is proxied back to a server I’m renting from DigitalOcean, and I’d really like it if they didn’t receive any complaints about the traffic that server generates. Feel free to mess around with other hosts inside of the 10.5.0.0/16 subnet, just please don’t do anything to intentionally take down my server (10.5.0.1), and don’t send anything that could be considered malicious traffic outside of that subnet.
2 Sebastian and I haven’t discussed it directly, but I’m sure I can safely speak for him and say that he feels the same way.