Category Archives: Linux

Securing ESP8266 Communication

Having a tiny device based on the ESP8266 and connecting it directly to the internet is not a trivial task. The ‘thing’ needs to communicate with at least one remote entity, and it needs to do so reliably.
Plain communication like TCP, HTTP or WebSockets might work in a sandboxed environment, localised on a single monitored and protected network. However, if even a single byte needs to escape this comfort zone, then all precautions need to be taken.

NodeMCU 1.0

There are several problems that need to be accounted for in order to secure the communication. Out of these, three will be handled in this article:

  1. End-to-end encrypted transmission
  2. Remote server identity
  3. Local device identity

One way to handle device identity is to flash a generated client key and certificate alongside the sketch, and use them to authenticate against the server. However, this approach is rather bulky, and hard to implement when provisioning multiple devices.
A simpler option is to generate a unique set of device address/keys and implement a safe storage and mapping mechanism:

  • the device stores its unique address either in memory (if always on), hardcoded in the sketch, flashed as data, or stored in the EEPROM (a bit insecure, because it can easily be read). For additional security, the final device address can be interleaved with pieces of the ESP8266 hardware address.
  • the remote server contains a mapper which maps a single device address to a unique device id, in this case maybe even human readable. This is required if the device owner needs to do some registration or administration, so he/she should not need to know the ‘physical id’ of the device. This also eliminates the human factor of exposing the device address.

For the other two problems, the end-to-end encryption and remote server identity, the Arduino port for ESP8266 has a nice secure client library called WiFiClientSecure. It’s not really state of the art, but if used properly, it will do the job just right.

The WiFiClientSecure uses a TLS implementation which is based on the axTLS library. It supports TLS 1.2, so generally a default up to date server configuration will work.
However, the ESP8266 is still a limited embedded device, and the library itself is also constrained. In order to properly use it, please be aware of the following:

  • the device can’t store and process trusted CAs in order to verify the server certificate. Instead, only the SHA1 fingerprint of the certificate is saved and compared.
  • the axTLS library supports only the following cipher suites: TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256 and TLS_RSA_WITH_AES_256_CBC_SHA256
  • the library can also be a bit buggy if the certificate is too big, or if you’re dealing with big payloads
  • the server configuration needs to be as precise, complete and simple as possible, meaning full correct configuration with no redirects and no rewrites

Getting the pieces together

1. Server configuration (apache httpd)

In your apache.conf, you need to map port 443, enable SSL, and bind the certificate(s):
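A minimal sketch of what that can look like, assuming mod_ssl is installed; the domain and certificate paths are placeholders:

  Listen 443
  <VirtualHost *:443>
      ServerName things.example.com
      DocumentRoot /var/www/things

      SSLEngine on
      SSLProtocol all -SSLv2 -SSLv3
      SSLCertificateFile      /etc/pki/tls/certs/things.example.com.crt
      SSLCertificateKeyFile   /etc/pki/tls/private/things.example.com.key
      SSLCertificateChainFile /etc/pki/tls/certs/ca-chain.crt
  </VirtualHost>

Keep the virtual host as simple as possible: no redirects, no rewrites, and make sure the certificate matches the ServerName.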

2. Check the configuration

Go to https://www.ssllabs.com/ssltest/analyze.html and scan your server.
In the results, first check the following sections:

  • Server Key and Certificate. They should all be valid and up to date, and the certificate name should point to your domain/server name. Check the RSA key size as well; 2048 bits works just fine, and according to the docs, axTLS can handle 4096 as well
  • In the configuration section, see if TLS 1.2 is there (TLS 1.0 and 1.1 will work too) and from the list of Cipher Suites search for the supported ones listed above. At least one of them should be present.
  • Find the Incorrect SNI alerts entry in the Protocol Details section. It must not show any problem at all.

3. Extract the fingerprint

With Chrome:

  1. Open your website (use https if you use both HTTP/HTTPS)
  2. Open Developer Tools
  3. Go to the security tab and click on View certificate
  4. Expand the details and go to the bottom to see the SHA-1 fingerprint

4. Client implementation

The Arduino code is rather straightforward, and a sample is included in the library examples. If you can’t find it in the examples menu, you can check it out from github: https://github.com/esp8266/Arduino/blob/master/libraries/ESP8266WiFi/examples/HTTPSRequest/HTTPSRequest.ino
For your implementation, adapt the server hostname, the location(s) of your resources and, of course, the logic.
Since in most cases the device will periodically communicate with the server, you can instantiate the WiFiClientSecure object globally and reuse it in every loop.

5*. Debug (if needed)

If something goes wrong, simple tracing and logging throughout the code will not help you a lot. When the SSL communication breaks or fails, little is shown directly. One layer of additional debugging output can be achieved by executing “Serial.setDebugOutput(true);” after initializing the Serial object in setup(). But most often, that will not be enough.
In order to get all the details, you need to enable full debugging. If you use a NodeMCU as a development board, and you have it selected in the Arduino IDE, then the debugging options will not be visible.
To enable full debugging for the NodeMCU, follow these steps:

  1. from the boards menu, select Generic ESP8266 Module
  2. from the freshly appeared options, find Flash Size, and choose 4M (3M SPIFFS)
  3. select nodemcu as Reset Method
  4. from Debug Port choose Serial
  5. from Debug Level choose core + ssl (or even + TLS mem)

Re-flash your sketch and open Serial Monitor. You should now see a lot more info.

In the end, again, if something goes wrong, a direct error message will rarely appear. My suggestion here is: if your Arduino SDK installation and board libraries are up to date, inspect your server settings in detail, because there is not much you can do wrong on the client side.

DIY Java & Kubernetes (Videos and Samples)

This year, on three occasions, I presented about Kubernetes and how you can use it on the Google Cloud Platform, or your own home-built cluster.

Kubernetes RaspberryPi cluster

Two of these presentations were recorded and are available on YouTube. The presentations, along with the code samples explained below, can be a great quick-start guide for making Java microservices or apps with Spring Boot and deploying them on a Kubernetes cluster.

The code samples (a bit modified though) are available on github: https://github.com/hsilomedus/kubernetes-samples.
In order to smoothly migrate from a single app deployment and the basics of running Kubernetes, to splitting the app into microservices and deploying a fully distributed scenario, I made two implementations:
– kub-calublog is a monolithic Spring Boot application which works with an in-memory database and represents a very simple blog platform.
– kub-calublog-ui and kub-calublog-service are ‘microserviced’ pieces of the same application which use HATEOAS RESTful calls between them.
– kub-calublog-deployment contains the scripts and resources for deploying the app or services on a cluster. /simple is for a basic single app deployment with docker-like commands, while /kubernetes is for the recommended declarative methods. /kubernetes-arm is just a showcase if a RaspberryPi ARM based deployment is to be done.

Things to do in order to get started:
– Sign up for a Google Container Engine free trial.
– Create a project, allocate machines (up to 4 shared will do)
– Install gcloud SDK on your machine
– Watch the video(s) and experiment on your own, having the samples for help. Make sure that you properly replace the docker.prefix placeholders in both the pom.xml files and the deployment scripts. (A minimal command sketch follows below.)
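As a rough sketch of those steps (the cluster name and zone are placeholders, and the exact gcloud/kubectl invocations may differ between SDK versions):

  # create a small cluster on Google Container Engine and fetch its credentials
  gcloud container clusters create kub-demo --num-nodes=3 --zone=europe-west1-b
  gcloud container clusters get-credentials kub-demo --zone=europe-west1-b

  # deploy the declarative resources from the samples
  kubectl apply -f kub-calublog-deployment/kubernetes/

  # check what is running
  kubectl get pods
  kubectl get services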

Videos:
– jPrime ’16: https://www.youtube.com/watch?v=ek2Ydow7Xh8
– Voxxed Days Belgrade ’16: https://www.youtube.com/watch?v=eWDyNrdsPAg

Other resources:
– Devoxx presentation done by Ray Tsang (@saturnism) and Arjen Wassink (@ArjenWassink): https://www.youtube.com/watch?v=kT1vmK0r184
– gcp-live-k8s-visualizer: https://github.com/brendandburns/gcp-live-k8s-visualizer
– Setting up Kubernetes on a RaspberryPi cluster: http://blog.kubernetes.io/2015/12/creating-raspberry-pi-cluster-running.html
– RaspberryPi Kubernetes stack 3d design: http://www.thingiverse.com/thing:1307094

OpenJFX with RaspberryPi

JavaFX on the Raspberry Pi is a particularly nifty platform to use when you need a nice looking GUI on a regular monitor or a touch-screen. JavaFX used to ship directly with JDK8 for ARM and was bundled (last I checked, it still was) with Raspbian out of the box.

However, from update 33, Oracle decided to drop support for JavaFX in the ARM distribution of their JDK, and stopped shipping it with the JDK as well.

However, the story doesn’t end here. This change just expedited my idea to give OpenJFX a try, after which I wished I had done so way sooner.

My test bed was a RaspberryPi with a PiTFT and a touchscreen-adapted JavaFX application. Previously I had implemented a headless start of the JavaFX application with fbcp running in the background, with all the parameters for the touchscreen set right in order to get a nice and correct projection of the FX framebuffer. With OpenJFX, this is no longer needed, because it is directly supported.

In order to quickly get and use OpenJFX on Raspbian, follow these steps:
– Get and flash a fresh Raspbian image. Make sure you have java and javac present.
– Download a built OpenJFX package (OpenJFX 8u## stable for armv6hf). I used @chriswhocodes OpenJFX builds. There are others listed at the OpenJFX community builds page.
– Extract the contents of the zip file. Copy the /jre/lib folder contents somewhere in your project (I simply copied everything to the project root folder).
– (include the /ext/jfxrt.jar file in your classpath if you compile in an environment without JavaFX present in the JDK)

Finally, when executing your java application, pass the following arguments to the JVM:
-Djava.ext.dirs=dir/to/jfx/ext (I used .dirs=ext because I unpacked the lib contents in the project root)
-Dglass.platform=Monocle
-Dmonocle.screen.fb=/dev/fb1  (only if you use a touchscreen like the PiTFT)
-Dprism.order=sw (again only for the touchscreen, but I’m not really sure. If you experience problems with eventual hardware rendering, use this)
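Put together, a launch command can look something like this (the class name and classpath are placeholders, and the ext directory matches my setup; sudo, or membership in the video/input groups, may be needed to access the framebuffer and input devices):

  sudo java -Djava.ext.dirs=ext \
       -Dglass.platform=Monocle \
       -Dmonocle.screen.fb=/dev/fb1 \
       -Dprism.order=sw \
       -cp bin my.package.MainApp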

The outcome was pretty pleasant. The UI looked good and adapted just fine, although be careful with the dimensions and the general conditions (see tips here). Also, I had no need to calibrate the screen; it worked correctly from the first run. And last but not least, not having to run fbcp in the background is a huge performance boost and resource saver.

Examples:

Main screen with buttons
LED light screen with color picker

Hi-Fi Audio Center with RuneAudio and RaspberryPi A+

For quite a long time, I had a RaspberryPi A+ just lying around and doing practically nothing. Every now and then I used it for some testing and occasionally playing around with the Unicorn HAT, but overall, because of the lack of connectivity and the performance limitations, it remained heavily unused.

Then one day, I stumbled upon rpiMusicPlayer.com and I liked every aspect of the software and the concept behind it. Having a miniature audio center in my apartment which could connect to my music library and accept music streaming was a great idea. So I decided: my A+ would finally meet the purpose it was created for.

The procedure for getting the whole thing running is rather straightforward, but can be bumpy depending on the equipment you have.

  1. Acquire a RaspberryPi A+ with a microSD card (preferably class 10, 8GB) and an adequate power source
  2. Think of how you want to connect the Raspberry Pi to your home network. The A+ doesn’t have an ethernet port, so if you want a wired connection, get a USB to ethernet adapter. If not, do what I did, and get a Wi-Fi dongle. I used a Tenda Pico wireless adapter. It is one of the supported adapters listed at the official RaspberryPi documentation site, and it hasn’t caused me any troubles before. Also, it’s rather cheap.
  3. Download the image for the Raspberry Pi based software.
  4. Follow the steps for flashing the SD card with the image.
  5. Connect everything.

Now, if you have a USB to ethernet adapter, I suppose everything should work right away, provided your router has DHCP enabled.

If however you use Wi-Fi for connectivity, the procedure is a bit more complicated, since the A+ has only one USB port and it will be used by the Wi-Fi dongle.

There are two possible ways of handling this:

  1. Try to connect a self-powered USB hub and attach the dongle + usb keyboard to it. Then, through the console, try to configure the Wi-Fi connection by using wifi-menu. Some details here. The username is ‘root’ and password is ‘rune’
    However, regardless of how simple it seems, I failed to configure it this way. Therefore:
  2. Get a B+, connect it to your router (or crossover to a computer) via ethernet and plug in the Wi-Fi dongle. Boot everything, and wait enough time (around 1-2 minutes) for the software to start. Then, from a computer on the same network, try to open http://runeaudio.local/ in your browser. If that fails, look for the IP address in your router (the hostname is runeaudio) and try with it (e.g. http://192.168.1.103/). In the menu, open network and configure your wireless connection (http://runeaudio.local/network/). Next, shut down the system via the UI and power off your B+. Eject the SD card, insert it in the A+, move the Wi-Fi dongle to the USB port as well and power the device on. If everything is OK, you should be able to access the system at runeaudio.local.

At this point, runeaudio will use the audio jack on the A+ to output the music. However, this is not the perfect option, since the digital-to-analog converter there is rather basic. Therefore, you might think of acquiring an external DAC which will improve the overall quality. Some of the supported DACs are listed on the homepage of the project. Bear in mind that the DAC will add no value if you use cheap RCA cables or a low-end audio system.

I used a HiFiBerry DAC+ with RCA connectors. The price is fine and the shipment was quite fast. In order to configure runeaudio to use the external DAC, simply open the web UI, go to settings and choose your device from the listed I2S kernel modules. It will require a reboot, which is not a problem. Then, open the web UI again, go to MPD and set the audio output interface to the DAC you’ve just configured.

Sweet, sweet sound (RasPi A+ with HiFiBerry DAC+ running on RuneAudio)

After installing and getting the whole thing up and running, you can move on to configuring it.

RuneAudio has a pretty nice web UI which will allow you to do everything you need. So far, I haven’t gotten to the point where I need to access the device via SSH and do any manual configuration.

You can add several music sources; I would just mention shared network libraries, AirPlay, DLNA and Spotify.

RuneAudio WebUI

As a conclusion, I must say I’m pretty satisfied with the outcome of this project. I’m mainly using it via AirPlay and so far it hasn’t caused me any troubles, even though it is connected solely through a cheap wireless-G USB dongle.

There might be some situations where the web UI is unresponsive, some settings are not saved on the first try, or the loading spinner does not disappear (often during an active AirPlay stream). Besides that, the entire system is rather cheap but effective, high quality and, most importantly, very usable.

DoorNFX: Touchscreen JavaFX 8 on Raspberry Pi

As of March 2014, Java8 is finally out there. A bunch of new features and improvements; not that they weren’t known previously, but it’s good that they went official. The ones I’m targeting with this blogpost are JavaFX, JDK8 on ARM devices, and their joint functionality.

The new JDK for ARM is targeted specifically at ARM v6/v7 HardFloat ABI devices running Linux. The best known and most widely accepted example of this is the Raspberry Pi running an OS like Raspbian. This JDK was around for some time with the early access program, so I had the chance to play around with it previously. However, for the example below, I’m using the official version.

JavaFX is, as the definition says, a set of graphics and media packages that enables developers to design, create, test, debug and deploy rich client applications that operate consistently across diverse platforms. In short, it’s a Java framework for building rich internet or desktop applications. Some of its features include:
– Pure Java API integrated in JavaSE: as of Java8, JavaFX is an integral part of the JRE and JDK. Its API is pure Java, so it can be used from any language that runs on the JVM.
– UI can be defined either programmatically or declaratively via FXML
– Interoperable with the old Swing
– All UI components can be styled with CSS
– New theme ‘Modena’ which makes the UI look very nice for a change
– General 3D features, hardware acceleration support
– WebView component which allows two-way interfacing (Java to JavaScript and vice-versa)
– Canvas and printing API, support for RichText
The easiest way to explore JavaFX is to play around with the Ensemble app on the Oracle web site.

THE DEVICE

DoorNFC GUI

So, as an example, I decided to put my NFC PN532 Java port to some use and make an actual device out of it. My idea was to make a protected door access node which reads NFC tags, prompts for a user code, authenticates it against some remote server and grants or declines access based on the outcome.

The core of the device is a RaspberryPi model B. The GPIO section has more than enough options for connecting multiple external devices. I’m using two add-ons which are made specifically for the RaspberryPi: the PiTFT and the ITEAD PN532 NFC module.

HARDWARE

DoorNFC - Device

The touchscreen used is the adafruit 2.8” PiTFT resistive touchscreen with 320×240 resolution. It is actually a Pi HAT device with a socket matching the Raspberry Pi header. Its assembly is very easy, and its usage with the Raspbian OS is relatively simple. For communicating with the RaspberryPi it uses the SPI interface.

The ITEAD NFC module is a PN532 based board with an integrated antenna. It exposes the same functionality as all the other PN532 boards and uses either SPI, I2C or UART for communication. However, this device also has a native RaspberryPi header interface. One bad thing is that this interface can only work with SPI. Since SPI, and the same channel, is already taken by the PiTFT, I made an alteration to the NFC module in order to patch it to work with the same header but using I2C. I’ve described that procedure in my previous blogpost.

OPERATING SYSTEM

As the core for starting the device, I used the pre-built adafruit image of the Raspbian OS. This image is described in detail in the adafruit tutorials section. Basically, it is a Raspbian OS with the patched kernel, driver and necessary configuration to enable and use the PiTFT. Besides all that, it also comes with JDK8 and a nicely split GPU/CPU memory configuration, which is the core requirement for running JavaFX applications on the Pi.

With only the image however, the job for configuring the device is not done. First, the FrameBufferCopy tool (fbcp) will be needed:
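These are roughly the build steps I used (assuming the commonly used rpi-fbcp sources; the repository location and install path may differ):

  sudo apt-get install cmake
  git clone https://github.com/tasanakorn/rpi-fbcp
  mkdir -p rpi-fbcp/build
  cd rpi-fbcp/build
  cmake ..
  make
  sudo install fbcp /usr/local/bin/fbcp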

Then,  the start-up console needs to be disabled. Do this by removing the fbcon map and fbcon font settings in /boot/cmdline.txt.

Next, and this is the trickiest part, the display needs to be adapted to the same format as the PiTFT. The touchscreen is designed to be in portrait mode with a resolution of 240×320. The original configuration done here for the X server rotates the display and re-calibrates the touchscreen. JavaFX runs in a framebuffer and is not connected to X whatsoever. Therefore, the display and the touchscreen behave differently and incorrectly. This can be fixed by force-adjusting the display resolution to 240×320 and not rotating the screen by default. In order to do so, alter the settings in /boot/config.txt:
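In my case that boiled down to something like the following (the exact values depend on your display setup):

  framebuffer_width=240
  framebuffer_height=320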

and by resetting the rotation in /etc/modprobe.d/: rotate=0

At the end, in order to enable I2C, modify /etc/modules by adding:
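The lines to add are the usual I2C kernel modules (a sketch, assuming a stock Raspbian of that era):

  i2c-bcm2708
  i2c-dev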

and comment out the i2c entry in /etc/modprobe.d/raspi-blacklist.conf.

SOFTWARE

The software for the device is already on github: https://github.com/hsilomedus/door-nfx

There are two packages present:

Writing JavaFX code for the Pi is rather straightforward. The most important aspects have to be met at the start, as they are more environment related. Others are just tips. Some that I can mention:

  • The CPU/GPU memory split needs to be configured correctly in order to achieve better performance (or even to get the JavaFX app up and running). 128MB for the GPU is a decent amount.
  • The JavaFX app will run in a framebuffer. This is maybe the biggest difference that you must keep in mind. Running JavaFX apps on the Pi is not conditioned by the presence of an X server: they don’t run in a widget or a frame and can be invoked straight from a console. Even better, running a JavaFX app from an X session will most likely break it and freeze the UI after you exit it. Always execute the JavaFX app from a console, local or remote.
  • Because the app will run in a framebuffer, make sure that you use and manage all the visual space that you have at your disposal and run it in full screen. You can still run it with a fixed size, but then it will most probably end up centered on the screen.
  • JavaFX will register its own keyboard and mouse handlers and render a mouse pointer. If you have some settings done in X that change the behavior of the mouse or the keyboard, they will not be present here. E.g.: a major problem with the touchscreen was the initial rotation. The screen was rotated, but the touchscreen was only calibrated for that in X. That’s why the settings here are reversed and the display is in portrait.
  • Last but not least: JavaFX runs in its own thread. If you need to perform other, heavier operations from the main routine or an event handler, do it in a different thread. If you need to alter something UI related from a different thread, use Platform.runLater.

You can see some examples already implemented in the source code. It’s not very pragmatic or anything, but enough to get the idea and to get the device working.

A general frame of a basic JavaFX app looks something like this:
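A minimal sketch of such a frame (the class name and the button are placeholders; the 240×320 scene size matches the PiTFT portrait setup described above):

  import javafx.application.Application;
  import javafx.scene.Scene;
  import javafx.scene.control.Button;
  import javafx.scene.layout.StackPane;
  import javafx.stage.Stage;

  public class DoorApp extends Application {

      @Override
      public void start(Stage stage) {
          // build the UI programmatically (FXML would work as well)
          Button scan = new Button("Scan tag");
          StackPane root = new StackPane(scan);

          Scene scene = new Scene(root, 240, 320);
          stage.setScene(scene);
          stage.setFullScreen(true);
          stage.show();
      }

      public static void main(String[] args) {
          launch(args);
      }
  }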

The other part of the code is the pi4j usage. Here I’m using the managed way to access the hardware aspect of the Pi and send/receive data through it:

  • I2C is used for communicating with the NFC PN532 module. The API is simple, but the hard part is maintaining the protocol set by the device manufacturer (see the I2C sketch after this list).

    The address of the device can be given by the manufacturer, or you can look it up with a tool like i2cdetect. See some tips in this adafruit tutorial.
  • General I/O pin provisioning can also be combined, even though both SPI and I2C are in use. Just make sure that you don’t provision a pin in a way that will break the other two interfaces. Again, here I’m using the managed API of pi4j (see the GPIO sketch after this list).
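A rough I2C sketch with the pi4j 1.x managed API (the 0x24 address and the GetFirmwareVersion frame are assumptions based on the PN532 documentation; verify the address with i2cdetect):

  import com.pi4j.io.i2c.I2CBus;
  import com.pi4j.io.i2c.I2CDevice;
  import com.pi4j.io.i2c.I2CFactory;

  public class Pn532I2CSketch {

      public static void main(String[] args) throws Exception {
          // bus 1 is the exposed I2C bus on a rev. 2 Raspberry Pi
          I2CBus bus = I2CFactory.getInstance(I2CBus.BUS_1);
          I2CDevice pn532 = bus.getDevice(0x24);

          // every command has to follow the PN532 frame format:
          // preamble, start code, length, checksums, postamble
          byte[] getFirmware = new byte[] {
              0x00, 0x00, (byte) 0xFF, 0x02, (byte) 0xFE,
              (byte) 0xD4, 0x02, 0x2A, 0x00 };
          pn532.write(getFirmware, 0, getFirmware.length);

          byte[] response = new byte[32];
          int read = pn532.read(response, 0, response.length);
          System.out.println("Read " + read + " bytes from the PN532");
      }
  }

And a GPIO sketch along the same lines (GPIO_01 and the ‘door lock’ output are just placeholders; pick a pin that does not clash with SPI or I2C):

  import com.pi4j.io.gpio.GpioController;
  import com.pi4j.io.gpio.GpioFactory;
  import com.pi4j.io.gpio.GpioPinDigitalOutput;
  import com.pi4j.io.gpio.PinState;
  import com.pi4j.io.gpio.RaspiPin;

  public class DoorLockSketch {

      public static void main(String[] args) throws Exception {
          GpioController gpio = GpioFactory.getInstance();
          GpioPinDigitalOutput lock = gpio.provisionDigitalOutputPin(
              RaspiPin.GPIO_01, "doorLock", PinState.LOW);

          lock.pulse(2000); // 'open the door' for two seconds
          Thread.sleep(3000);

          gpio.shutdown();
      }
  }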

Some remarks about using pi4j:

  • Since pins need to be provisioned (exported), the java process MUST be started with sudo, or else it will fail.
  • I2C is used for communication, so make sure that the device is enabled and not blacklisted
  • The communication is not reliable. Your app should be prepared for that and easily recover from miscommunications.

RUNNING AND CONCLUSION

For ease of access, I’ve added two shell scripts (build.sh and run.sh) to make my compile & test experience on the Pi bearable. The pi4j library is automatically added to the classpath for both compiling and testing, the java process is run with sudo, and fbcp is run in parallel.

The performance of the app itself is so-so. I can’t really draw a conclusion, since fbcp is an important factor and it may alter the visual response. Overall it is usable, but still not on the level I want to see.

Always bear in mind that the device is quite limited in resources, and the platform itself is still catching up. It would be great if some ideas from OpenJFX, like setting a target framebuffer or altering the touchscreen input, were implemented in the Oracle JDK too. That way, the output would be independent and, I would presume, more efficient.

P.S. I’ve also done a different JavaFX app which reads RFID tags, runs on an HDMI monitor, and is used as a poll. The output is quite a bit bigger and the solution is simpler, but the overall user experience is still similar.

Overclocking the Raspberry Pi with OpenELEC

The Raspberry Pi, if limited only to rendering HD videos, handles the job quite nicely. The hardware accelerated video playback makes things run smoothly.

However, if you’re using some software like xbmc, or the complete solution with OpenELEC, then the navigation and everything else can get sloppy.

A nice way to speed things up there, and make the navigation smoother, is to overclock the Pi.

Along with some newer firmware versions, the configuration file comes with some suggested presets for boosting the Pi. With them, the ARM, core and memory frequencies can be increased, as well as the overvolt setting.

Increasing the frequency or the overvolt parameters will inevitably increase the thermal output of the device. This can still be within acceptable limits, but if the Pi is used heavily over a prolonged period, it can become unstable or freeze.
To be on the safe side, I recommend that you purchase some thermal sinks and stick them to the important chips on the board. I bought these from dx.com, and they are working just fine.

The easiest way to do the overclock is to use your PC. Insert the SD card in the SD card reader, and open it to view its contents. Find the file named ‘config.txt’ and open it with a text editor.

Somewhere in it, you can find the following lines (or similar):
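The relevant entries look roughly like this (the preset values are the commonly published ones for the model B; the exact comments differ between firmware versions):

  # Overclock presets:
  #   Modest: arm_freq=800,  core_freq=250, sdram_freq=400, over_voltage=0
  #   Medium: arm_freq=900,  core_freq=250, sdram_freq=450, over_voltage=2
  #   High:   arm_freq=950,  core_freq=250, sdram_freq=450, over_voltage=6
  #   Turbo:  arm_freq=1000, core_freq=500, sdram_freq=600, over_voltage=6
  #arm_freq=700
  #core_freq=250
  #sdram_freq=400
  #over_voltage=0
  force_turbo=0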

Modest, Medium, High and Turbo are the suggested presets which, allegedly, have been tested. In order to use one, uncomment the four overclocking values and insert the ones from the preset. I jumped directly to the last one, so my config at the end looked like this:
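Something along these lines, matching the Turbo preset and the notes below:

  arm_freq=1000
  core_freq=500
  sdram_freq=600
  over_voltage=6
  force_turbo=0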

Note 1: it has been said that sdram_freq should be set to 600 for the Turbo mode. Some users have reported SD card corruption when it is set to 500. I haven’t tested this out yet.
Note 2: leave force_turbo=0. This means that the device will dynamically increase and decrease the frequency based on the load. This can prolong your Pi’s life, and will probably prevent some corruption from happening. Forcing the turbo mode uses the overclocked values as constants, but also voids your hardware warranty.

After editing the config.txt file, save it, and put the SD card back to the Pi and boot it. The system should now run with the changed settings.

Since the force_turbo option remains zero, the CPU frequency changes dynamically, and some tools will still report the same 700MHz at idle. Under heavier load this will change, though. After some playing around, I snapped this:

After the overclocking, I got some very noticeable improvements in the xbmc navigation. Stuff started to move more smoothly. It’s still not perfect, but it’s much more manageable now.

Also, the thermal sinks seem to be really needed. The one on the CPU was quite hot after having the Pi running for about three hours.

pi4jmultimeter: Raspberry Pi + Arduino + d3js electrical multimeter

As a fun project and as an example for my recent talk at the Jazoon conference, I’ve made this device. I call it my pi4jmultimeter.

What it does is provide some basic electrical multimeter features:

  • DC voltage measurement
  • AC waveline preview
  • AC spectrum analysis
  • Electrical resistance measurement

The device is a combination of a Raspberry Pi and an Arduino. The Arduino does the analog readings and uses the Fast Fourier Transform to produce the spectrum graph data. The Raspberry Pi hosts the main process and runs a lighttpd web server. Both devices are connected via their extension headers (the Raspberry Pi’s GPIO and the Arduino’s Vin, GND and serial port pins) and have a paired serial port interface. The end result is visualized in a JavaScript web application which renders optimized d3js animated SVG graphs.

The Arduino loops a program which does simple analog reads on three inputs, performs periodic FFTs and sends all this data to the Raspberry Pi via the serial interface. The Raspberry Pi has a running Java8 SE Embedded process based on the pi4j and Java WebSocket libraries. It reads everything from the serial port with the help of the pi4j library, packs everything in a nice JSON format and broadcasts the data to every open WebSocket connection using the Java WebSocket library.

The whole source code with the build instructions can be found on GitHub: https://github.com/hsilomedus/pi4jmultimeter
The slides of my presentation are available at the Jazoon guide webpage (or directly at slideshare).

The d3js graphs can be observed and used individually. All the graphs populated with some random data can be found here:


Media center with OpenELEC and RaspberryPi

For some time I’ve been trying to get a nice home media center set up. The conditions were simple: local storage (external hard drives), xbmc, HD output (720p) and a smoothly running xbmc instance.

I’ve tried three options: Jailbroken AppleTV2, full-fledged PC running Ubuntu and last but not least, Raspberry Pi with an OpenELEC image.

The AppleTV2 is a very neat device: compact, lovely remote, HDMI output, ethernet connection and low power consumption. The jailbreaking procedure is a bit tricky and can fail a few times, but by properly following the instructions, you’ll get the job done. Xbmc runs quite smoothly, and the installation places it on the home menu. At first, the setup was nice, and the rendered videos seemed OK, but then the harsh truth became visible: the AppleTV2 has hardware support for decoding x264 videos, and that’s it. The performance with everything else was simply not acceptable: lags, cuts, freezes. Verdict: not for usage.

The PC option worked as expected: a dual core processor capable of processing HD videos of any kind, enough processing and networking power for smooth operation and communication, a strong display card with digital output (DVI) and more than enough sockets for connecting external and internal storage. However, I struggled a lot with the fan noise, and after trying several (expensive) options, I gave up. No matter how quiet the fans are, the silent buzzing will start to get on your nerves, and will kill the experience you want. Verdict: smooth, but not neat.

The third option was very underrated at the beginning. The RaspberryPi, with its very limited processing power, storage and memory, seemed like a very unsuitable device for the task. However, I did some research and gave it a try. The facts that made it look promising and started to change my initial impressions were:
– it has hardware support for decoding x264 and MPEG4 videos, the two codecs most likely used in today’s ripped movies and videos.
– xbmc has an option for passing DTS and AAC audio through to the TV via HDMI. This way, the RaspberryPi won’t bother with decoding the audio, and most of the digital TVs on the market today have this feature.
– the HDMI output is CEC capable, which means that if the TV is also suitable, you can control the device via your TV remote.
– external drives can be mounted via the USB ports. Not directly perhaps, but with a self-powered USB hub, there’s no problem.

So I purchased a RaspberryPi (Model B, rev. 2, about 65 euros with a case) and the fun began. In the search for the ideal OS, I rejected Raspbian Wheezy at the start because it’s too generic, and I started looking for a specialized media center OS. The wiki on the xbmc.org site has some detailed explanations. The options were Raspbmc, XBian and OpenELEC. After some searching, I got the impression that OpenELEC is the one that is most accepted by the community, so I gave it a try.

OpenELEC is a small linux distro that makes your computer/device a compact media center running xbmc. The best installation procedure I could find is: download an image from here and follow the instructions here. In simple terms, you need:
– RaspberryPi Model B rev.2
– HDMI cable, Ethernet connection
– nice TV
– around 1A 5V MicroUSB power source
– self-powered USB hub
– the OS image file (.img)
– image file to SD writing software (Win32 disk imager for example)
– SD card (preferably 8 GB, just in case)

Unpack, write, insert the SD card into the RaspberryPi, connect, plug in.

The first boot, as I noticed, is a bit slower than usual, but successful. After some time, I got the xbmc home screen. Now, the xbmc performance in terms of navigation, processing, etc. is… well, not good. It can be slow and laggy, but eventually it will do what you have requested, so have some patience. Take some time to finalize the configuration: audio passthrough, CEC settings (for the TV remote), remote control (if you want to control xbmc from other devices and software like Constellation or Yatse), media content directories (plug in the external storage, OpenELEC mounts it automatically), movie search and cache, etc.

If you finished everything fine, you’ll have a compact and well connected media center which you can control via your remote or a phone/tablet on the same network. The video performance is excellent, and so far I haven’t run into any rendering issues with x264, DivX or Xvid movies in 720p. Supposedly it can render 1080p without issues, but I didn’t want to push it to that level, and also, considering my TV size, the difference is not noticeable at all.

Tips:
– use proper images; don’t take nightly builds, since they will not update correctly.
– use a remoting device if possible. It makes browsing and searching possible and tolerable, since the built-in one is quite slow.
– correct the CEC settings regarding the turn on and turn off actions. Once you ‘turn off’ the device, you have to power it off/on again in order to turn it back on.
– ALWAYS use a self-powered USB hub, because if the device draws too much current, it may burn the RaspberryPi.
– don’t use the BerryBoot option with OpenELEC. It’s just not good.

Apache virtual hosts: wordpress and subversion

A setup I made for my own needs: how to run WordPress and a Subversion server as virtual hosts on one Apache instance.

1. Install and configure Apache

If not included by default, you need to install and set the Apache HTTP server to run on boot.
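On a CentOS-like system (assumed here, matching the chkconfig note below; package names may differ on other distributions), that boils down to something like:

  yum install httpd
  chkconfig httpd on
  service httpd start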

Note: you should also take care of the configured runlevels. You can specify the runlevels for running httpd on boot by using chkconfig --level <runlevels> httpd on. To see if it’s fine, run chkconfig | grep httpd

Open a browser pointing to the IP (or domain if already configured, ‘localhost’ if working locally) of your web server. An example apache web page should appear.

2. Install and configure WordPress

2.1. Install and configure MySQL. (If you already have MySQL installed, skip this step)
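A minimal sketch of that step on CentOS (running mysql_secure_installation to set the root password is optional but recommended):

  yum install mysql-server
  chkconfig mysqld on
  service mysqld start
  mysql_secure_installation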

2.2 Create a wordpress database and user
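Something along these lines, using the database and user names referenced further below (pick your own password):

  mysql -u root -p
  mysql> CREATE DATABASE wordpress;
  mysql> GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost' IDENTIFIED BY 'your_password';
  mysql> FLUSH PRIVILEGES;
  mysql> EXIT;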

2.3. Setup wordpress
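Roughly, download the latest WordPress and place it under the Apache document root (the /var/www/wordpress path is a placeholder, reused in the virtual host example further below):

  cd /tmp
  wget http://wordpress.org/latest.tar.gz
  tar -xzf latest.tar.gz
  mv wordpress /var/www/wordpress
  chown -R apache:apache /var/www/wordpress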

Now, the configuration for wordpress needs to be entered. Copy and open the wp-config:
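For example (adjust the path to where you extracted WordPress):

  cd /var/www/wordpress
  cp wp-config-sample.php wp-config.php
  vi wp-config.php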

Edit the following sections with the required data:

  • DB_NAME – the name of the created wordpress database (here is wordpress)
  • DB_USER – the username who has privileges for the database (here is wpuser)
  • DB_PASSWORD – the password for the user that you have entered previously
  • DB_HOST – if on remote host, the name of the host. Here, it’s localhost
  • DB_CHARSET and DB_COLLATE in most cases should remain unchanged.
  • Secret key values – use the generator here: https://api.wordpress.org/secret-key/1.1/salt/

2.4. Configure Apache first virtual host for wordpress

Edit the apache configuration file and enable virtual hosts:
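With Apache 2.2 (assumed here, matching the linked documentation), enabling name-based virtual hosts in /etc/httpd/conf/httpd.conf comes down to:

  NameVirtualHost *:80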

I prefer to add separate config file for each virtual host. These files are added in the /etc/httpd/conf.d directory and are processed by Apache in alphabetical order.
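A sketch of such a file for the WordPress host, e.g. /etc/httpd/conf.d/10-blog.conf (domain and paths are placeholders):

  <VirtualHost *:80>
      ServerName blog.something.com
      DocumentRoot /var/www/wordpress
      <Directory /var/www/wordpress>
          AllowOverride All
          Order allow,deny
          Allow from all
      </Directory>
  </VirtualHost>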

Note that the first virtual host you configure, regardless of the path applied, is the default one which Apache will use if no recognizable host is found. This means that even if you configure the virtual host to be found at a specific sub-domain, say blog.something.com, every address of this sub-domain will also be reachable without the subdomain. Example: blog.something.com/something can also be accessed via something.com/something.

The configuration is now complete. Open a browser and point to the online configuration: something.com/wp-admin/install.php

3. Install and configure subversion

In order to have private, password protected repositories, create one or several passwords first:
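For example, with htpasswd (the /etc/svn-auth-users file is a placeholder path; -c creates the file and is only needed the first time):

  htpasswd -cm /etc/svn-auth-users someuser
  htpasswd -m /etc/svn-auth-users anotheruser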

Create two different repository parent directories. One will be used for public repos, while the other one will be for the private, password protected ones:
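A sketch of those steps (the paths and the example repository name match the URLs used at the end of this post):

  mkdir -p /var/www/svn/public /var/www/svn/private
  svnadmin create /var/www/svn/public/example
  svnadmin create /var/www/svn/private/example
  chown -R apache:apache /var/www/svn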

In future, in order to create new repositories, just repeat the svnadmin and chown commands.

The final step here is to configure the second Apache virtual host, dedicated to Subversion. To do so:
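A sketch of e.g. /etc/httpd/conf.d/20-svn.conf, assuming mod_dav_svn is installed and using the paths from above:

  <VirtualHost *:80>
      ServerName svn.something.com

      <Location /public>
          DAV svn
          SVNParentPath /var/www/svn/public
      </Location>

      <Location /private>
          DAV svn
          SVNParentPath /var/www/svn/private
          AuthType Basic
          AuthName "Private repositories"
          AuthUserFile /etc/svn-auth-users
          Require valid-user
      </Location>
  </VirtualHost>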

The configuration is now all done, restart the httpd service once more and point to: svn.something.com/public/example or svn.something.com/private/example (the second one should prompt for username and password).

Useful links:
http://centoshelp.org/servers/database/installing-configuring-mysql-server/
http://codex.wordpress.org/Installing_WordPress
http://blog.adlibre.org/2010/03/10/how-to-install-wordpress-on-centos-5-in-five-minutes-flat/
http://httpd.apache.org/docs/2.2/vhosts/
http://wiki.centos.org/HowTos/Subversion/
http://schwuk.com/articles/2004/08/28/using-subversion-with-apache-virtual-hosts/