ofRaspberryPiVj: “VJing” with a Raspberry Pi

By | ofRaspberryPiVj, Raspberry Pi | No Comments

Since I got openFrameworks working on my Pi, I have been thinking a lot about VJing with it.
So last week I started working on a kind of VJing application for the Raspberry Pi. This is going to be my first real openFrameworks (or C++ for that matter) app after graduating, so it took some time to get up to speed, but I have made some good progress.

Right now I have clip selection working and that’s it; I have not started on manipulating clips yet.
Also, I am developing on my Ubuntu machine, where I have the game controller working (via the awesome ofxGamepad addon!).

For development, I use the Thrustmaster Dual Analog 3 USB controller. Take a look at the project page on GitHub.
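
To give a concrete idea of what “clip selection” currently boils down to, here is a rough sketch in openFrameworks. It is not the actual ofRaspberryPiVj code: the clip file names are placeholders, and the input is the keyboard instead of ofxGamepad, just to keep the example self-contained.

    // Rough sketch of the clip-selection idea, not the actual ofRaspberryPiVj code.
    // Input is the keyboard here instead of ofxGamepad, and the clip names are placeholders.
    #include "ofMain.h"

    class ofApp : public ofBaseApp {
    public:
        vector<ofVideoPlayer> clips;
        int current;                               // index of the clip currently playing

        void setup() {
            vector<string> files = {"clip1.mov", "clip2.mov", "clip3.mov"};   // placeholder names
            clips.resize(files.size());
            for (size_t i = 0; i < files.size(); i++) {
                clips[i].load(files[i]);           // loadMovie() in older openFrameworks versions
                clips[i].setLoopState(OF_LOOP_NORMAL);
            }
            current = 0;
            clips[current].play();
        }

        void update() {
            clips[current].update();
        }

        void draw() {
            clips[current].draw(0, 0, ofGetWidth(), ofGetHeight());
        }

        // Selecting a clip: stop the old one, start the new one.
        void keyPressed(int key) {
            int index = key - '1';
            if (index >= 0 && index < (int)clips.size() && index != current) {
                clips[current].stop();
                current = index;
                clips[current].play();
            }
        }
    };

    int main() {
        ofSetupOpenGL(1280, 720, OF_WINDOW);
        ofRunApp(new ofApp());
    }

In the real app the keyPressed handler is replaced by the gamepad button events, but the select/stop/play logic stays the same.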


Hacked an openFrameworks app onto my Pi

By | Raspberry Pi | No Comments

After I got openFrameworks running on my Raspberry Pi this morning, I wanted to set up a workflow to develop openFrameworks apps and deploy them to my Pi.

The openFrameworks website advises some kind of cross-compiling setup, meaning you have a Linux desktop or laptop compile your apps and deploy them to the Pi. As I understand it, there are two ways of doing this. 1) Write the apps on the Pi itself and, via some ssh magic, let the compiling be done by the desktop; once that is done, the binary is sent back to the Pi. Or 2) develop on a desktop and compile the app there; once that is complete, transfer the binary to the Pi. Basically the same, except for the platform you write the code on.


Installed openFrameworks on my Raspberry Pi

By | Raspberry Pi | No Comments

Recently I got myself a Raspberry Pi! I was as excited as when I received my first Arduino. I like to tinker with my Arduino, but I don’t spend nearly as much time with it as I would like.

Now that I have my Pi, things are about to change! Or at least, that’s what I keep telling myself :-). The main reason I got my Pi was to brush up on my hardware skills, only to find out that the GPIO pins on the Pi are male, as opposed to the IO pins on the Arduino. So there I was, sitting with my LEDs, breadboards and jumper wires, and no converter to connect them.



VJing at the Van Gogh Museum

By | VJ | No Comments

Together with Vj PublicEmily, I did a VJing show at the Van Gogh Museum in Amsterdam. Every Friday evening they invite different artists to make music and do some VJing. Together with VeeJays.com, Vj PublicEmily and I prepared a show based on Van Gogh paintings that the museum provided us with.

The theme of the evening was “Big City Life”. We tried to show cities as we experience them today, and to place city life as Van Gogh himself saw it next to that.

You can still see our work every Friday this January! Check the event page for more details.

What is the value of asking for permission?

By | Internet, Privacy, Random Thoughts | No Comments

Information gathering by large corporations has been a big issue lately. Google is being sued for harvesting WiFi locations, telecom companies use Deep Packet Inspection (DPI) to track which applications smartphone users are running, LinkedIn links users to advertisements and, of course, Facebook has faced some issues regarding its users’ privacy.

You often hear that privacy no longer exists on the web. “With the arrival of social networks, users gave up their privacy” and “If you don’t want to be tracked, don’t share any information about yourself” are arguments I come across quite often. And although I can partly agree with the latter, I sure as hell reject the thought that privacy is a thing of the past. That attitude not only clears the way for an Orwellian scenario to become reality, but also increases the risk of identity theft significantly. No, privacy is something worth fighting for, so yes, I am a proud donor to Bits of Freedom.

However, the other argument (“Don’t share anything you don’t want to be public”) has a point. If you don’t want to lead thieves to your house, don’t post your holiday plans online, or your address at all for that matter. But you still want to get in touch with your friends on Facebook. Yes, you can say: “if you don’t agree with their privacy policy, don’t sign up”. But that does not change the fact that they are lying to their users. They present themselves as a social network while they are an advertising company, just as Google is. And although that is as clear as day, it is just wrong nonetheless. But they are getting away with it.

There aren’t laws for these kinds of things. I don’t know if and how there should be laws to protect users. But in other fields there are regulations that companies must comply with. If I buy an iPod, for example, the European Union has laws that force a warranty of at least two years, even if Apple itself only gives one year. Maybe users of social networks should be protected like this as well. As said, in what form, I do not know yet.

But the fact remains that if I don’t want to give Facebook some piece of information about myself, I won’t share it. Like my phone number. My phone number is precious to me; it is one of the few things that have a ‘direct connection’ to me. By that I mean that if my phone rings, I take the call. Because I know who I gave my number to, I know who to make time for when my phone rings. Call it naive, but I consider my phone number private. The thing I want to avoid the most is being contacted by phone for commercial purposes.

In the last few days I saw quite a few posts on Facebook warning users that their phone contacts are being harvested.

Users of the Facebook app on a smartphone can sync their address book to Facebook. They get a warning that their contacts’ information is being put online and that they should ask their contacts for permission to post it.

Wait… what?!

I can put my phone contacts online, but I have to ask my contacts for permission first? And if I tap “sync”, whether I asked my 191 contacts or not, all contact information is uploaded to Facebook. This includes names, phone numbers and email addresses.
First, who reads those warnings? And second, who is going to ask their complete contact list for permission (without even the option to leave out the ones that responded negatively)?

OK, so it is your own fault if you don’t read the warning. Agreed. But let’s view it from another perspective.
Take a friend of mine: he has just bought a new smartphone and is super stoked about it. He downloads the Facebook app and syncs his complete contact list. “That’s cool! I still have my contacts even if I lose my phone!” He did see some kind of warning, “but hey, there was a ‘Next’ button, so why bother reading?”.

Alright, his full contact list has been uploaded to Facebook. But now my precious phone number is stored somewhere on their servers too, complete with my profile connected to it.

The thing I wanted to prevent the most has happened without my permission, or even my knowledge for that matter. It all happened because a friend of mine decided it was all right.

My number is only visible to the person who uploaded it. True, it is not publicly linked to my account (‘publicly’ being the keyword here), so other Facebook members won’t see it. But that is not what I am scared of. My number now falls under Facebook’s privacy policy, and they are still a company trying to make a profit. They don’t explicitly state that they WON’T sell my number to advertising agencies. To me, that sounds like they WILL sell my number to advertising agencies, together with my profile data.

And this is what bugs me the most: I take good care in deciding what I post online and what not. But more and more, others are making the decision to post things about me online, without my knowledge or permission. And to me that is just wrong. Even if I didn’t have a Facebook profile, things about me would be stored there. Things I consider to be private.

I know it is naive of me to think that big corporations are going to change, but I still see this as a problem. How should we address it? Educate our friends? Teach them about the web and how it works? Are there more people out there who share my thoughts? Let me know in the comments!

Selection on touch devices

By | Graduation, Supportive Narrative | No Comments

One of the main problems in working with a touch device is the lack of tactile feedback. The screen you touch is a flat surface, without any structure that can give you a reference of ‘where you are’. This problem is addressed differently by different devices.

I took the iPad, the iPod Touch and an Android phone to find out what the differences between the devices are. Besides that, I wanted to find out how the size of the device influences the experience.

The feedback to the user is handled by the software running on the device, as opposed to the hardware, which is all the same (a piece of glass). Although each app can create its own way of handling feedback, most of the apps created on Android and (especially) iOS use pre-made components from the SDKs to handle common elements such as sliders, data collections, buttons and such. I have divided this post by operating system, but it is not my intention to give an opinion about the operating systems themselves, nor is this an Android vs. iOS post. I just write down what I experienced while using them.

iOS

iPad (iOS 4.3.2)

I don’t own one, so I had not worked with it intensively before, but because I know my way around an iPod Touch I am comfortable with iOS. I had high expectations of the iPad’s interaction model, partly because of the size of the device, but also because I thought that Apple had put extra effort into it. This turned out to be a mistake.

The iPod Application

The height of a track is approximately the height of a fingertip, so pressing it is not a difficult task. That sounds obvious, but the playhead on the scrub bar is much smaller and is actually more difficult to press. When you drag the playhead along the bar, your finger covers it completely, and because a finger is way bigger than the playhead, this leads to inaccuracy. The time display updates, but I miss some actual feedback around my finger. iOS shows a magnifying glass just above your finger when selecting text; they should have implemented some kind of feedback while sliding over sliders as well, a bit like what YouTube does in its videos. The volume bar, on the other hand, is a little bigger, which feels much more comfortable. But then again, I miss some represented data, as my finger still covers the slider knob. In the Album and Genre sections, the albums and genres are represented by tiles. When these tiles are pressed, they grow and flip around to show the content of the tile ‘on its back’.

This all works pretty well and everything is easily accessible. But the only feedback that something is actually pressed is the desired action being performed. So when I press a tile, I know that I did the right thing the moment the tile begins to grow and flip. But there is a slight delay between the two, and that delay is noticeable. When I press a song to play it, its color changes to blue and then fades back to its original state. But it does that at the same time the song starts playing, so if the song needs to be loaded, it takes some time before the visual feedback is applied. I am talking about fractions of a second here, but still, it is noticeable. Switching views (Number/Artist/Album etc.), however, is instant.

The Photo View Application

In the overview, a single tap on a photo opens that photo. A long press gives the option to copy the photo. The thumbnails of the photos are bigger than a finger, so pressing them is not a difficult task.

When a photo is opened, it covers the entire screen. When you pinch it to zoom in, you go into zoom mode. To close the photo and go back you pinch it, but when you are in zoom mode, you have to return to the default size before you can close it. Below the photo is a strip of very small thumbnails of all the other photos. When you drag across these thumbnails, the corresponding photo immediately opens full screen. Again, no little popup by my finger, just immediate loading of the photo.

iPod Touch (iOS 4.2.1)

iOS runs exactly the same on an iPod Touch (or iPhone for that matter). But because the device is smaller, it all feels a bit more in proportion. I think it shows that iOS was originally intended for a device you can hold in one hand, and that it was applied to the iPad as well. The feeling I got with the scrub bar on the iPad is due to its context, because the scrub bar on the iPod Touch is exactly the same thing. But because the iPad is larger, its scrub bar feels smaller.

The general UI of iOS

The buttons have very subtle feedback: they slightly change color. The UI reacts to very subtle gestures. On the iPod that is not an issue, because the device is small, so you expect to make small movements. But with the iPad being a lot bigger, I expected to have to make bigger movements. This cuts both ways: on the one hand, because you can make small movements, the feeling of control is a bit greater; on the other hand, because it reacts to small movements, there is more room for accidental gestures.

Android

I am using an HTC Desire phone that runs Android version 2.3.3 with the custom ROM CyanogenMod 7.0.3. I think CyanogenMod comes about as close to stock Android as possible.

The Contacts Application

Like the iOS lists, the contacts in Android’s list are about as big as a fingertip, slightly bigger than the iOS ones. What stands out is that every interactive object is big. Even the scrollbar is big.

The Music player

The scrub bar is fairly big in the music player. Still, even though the playhead is big, a finger covers it easily. So while scrubbing, I miss a precise indicator around my finger on Android as well.

The general UI

Everything you touch on Android reacts immediately. As soon as the device registers a touch on something interactive, that element responds by changing color. The task to be performed may still be loading, but the feedback is already given. Everything (that is interactive) in Android is big, which results in easy access to everything.

Conclusion

I have been a bit of a hairsplitter on these things. In general, both OSes work fine and they both have their pros and cons. It’s clear that Apple goes for aesthetics where Google goes for functionality; that is not new. But what bugs me on iOS is that it relies on its speed, so that once I tap on something, it doesn’t give me confirmation that I did the right thing. I have to wait until the actual action is performed before I know that I succeeded in what I had in mind. When the device is a little slow, this shows immediately. I am not talking about seconds here, far from it, but just enough to be a little annoying. Android, on the other hand, gives you instant feedback.

Because iOS goes for aesthetics, the content is better laid out; it’s better proportioned than on Android. So Android can feel a little clumsy sometimes, but I am fairly sure that its approach results in fewer misses when pressing things.

What has become clear to me is that instant feedback helps a lot. This can be further enhanced by vibrating the device (although an iPad does not vibrate, to my knowledge). I still have to try out an Android tablet with Honeycomb (Android 3), because that OS is written with a tablet in mind instead of a phone.

Gestures!!

By | Gestures, Graduation | No Comments

Finally! After two long weeks of frustration I took the second step: detecting a gesture! I blogged earlier about being able to detect individual fingers and keep track of them. Since then I found the app KinectCoreVision, an app based on the famous CommunityCoreVision. KCV was made to work with the Kinect and has finger detection support!!

The reason I want to go for the KCV app is that it acts like a server that broadcasts TUIO data. I knew that there are several TUIO libraries that handle gesture detection. Since my project is all about the actual gestures themselves (and the user performing them), not about the technical side behind it, I want to use open source libraries to handle the technical details for me. The only problem was that I could not find a fitting, working one.

I knew they were out there (I tried a lot of them) but I could not get any of them to work. I decided to switch from openFrameworks to ActionScript 3. I am a little more comfortable around AS3, so I thought that would speed up the process a little. After three long days of Flex/AIR SDK and FDT problems I finally got a tuio-as3 example working! It works with the KCV app, and now I can successfully move a square around the screen by just pointing at it! It is very buggy, but that is because the finger detection in the KCV app is not very stable yet. The example is looking promising!
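
For what it’s worth, the plumbing is fairly simple: KCV broadcasts TUIO, which is OSC underneath (by default on UDP port 3333), and the cursor profile messages on /tuio/2Dcur carry the tracked finger positions. My actual example uses tuio-as3, but roughly the same idea in openFrameworks terms would look like the sketch below; it listens with ofxOsc directly, and the draggable square is made up for illustration.

    // Hypothetical openFrameworks sketch of the same idea as my tuio-as3 example:
    // listen for TUIO cursor messages from KCV and drag a square around with a finger.
    #include "ofMain.h"
    #include "ofxOsc.h"

    class ofApp : public ofBaseApp {
    public:
        ofxOscReceiver receiver;
        ofRectangle square;                       // the square we want to move around

        void setup() {
            receiver.setup(3333);                 // default TUIO/OSC port
            square.set(100, 100, 120, 120);
        }

        void update() {
            while (receiver.hasWaitingMessages()) {
                ofxOscMessage m;
                receiver.getNextMessage(m);       // older openFrameworks versions use getNextMessage(&m)
                // TUIO 1.1 cursor profile: "/tuio/2Dcur set <sessionId> <x> <y> ..." with x/y normalized 0..1
                if (m.getAddress() == "/tuio/2Dcur" && m.getArgAsString(0) == "set") {
                    float x = m.getArgAsFloat(2) * ofGetWidth();
                    float y = m.getArgAsFloat(3) * ofGetHeight();
                    if (square.inside(x, y)) {
                        square.setPosition(x - square.width / 2, y - square.height / 2);
                    }
                }
            }
        }

        void draw() {
            ofDrawRectangle(square);              // ofRect() in older openFrameworks versions
        }
    };

    int main() {
        ofSetupOpenGL(1024, 768, OF_WINDOW);
        ofRunApp(new ofApp());
    }

A dedicated TUIO library adds session tracking, add/remove events and so on on top of this, which is exactly the part I don’t want to build myself.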

Once I figure out how to stabilize the detected fingers, I can make a first prototype. When I do, I will post some footage.

(Multi) touch gestures on existing OSes

By | Graduation, Supportive Narrative | 2 Comments

My research question is heading in the right direction (it’s not definitive yet) and I have defined some subquestions, so I had a chat with my teacher last Monday. It all went pretty well, but he thinks I need to shift my focus more towards the practical research. I do agree with him to some extent, but I still want to have a solid understanding of what I am working with. I just need to find the right balance.

Since I am doing two things at the same time (building the system and writing the supportive narrative), I have decided to work on one of them per week. So this week it was research for the supportive narrative, and next week it is building and coding the system.

The first thing I want to do is get myself some gestures to work with for the rest of the project. I am planning on using four gestures and working them out in depth.

GUI and WIMP

Before I can talk about the context in which multi touch operates, I need to explain the current way of computing. Although a GUI and WIMP are by far the most common ways of computing, there are still a lot of systems that use a command line or other methods, all kinds of servers for instance. It is, however, out of the scope of this article to talk about those. The term WIMP is often incorrectly used as a synonym for GUI.

GUI stands for Graphical User Interface. It is an interface based on graphics rather than a command line.

WIMP stands for Windows, Icons, Menus, Pointer. Different sources have different interpretations of the meaning, but the general concept stays the same. It’s an interface where applications run independently of each other, at the same time, in their own windows; objects like files are represented by icons; menus give access to tasks; and a pointing device is used to perform certain tasks in certain circumstances. Note, however, that while the pointer is most of the time a mouse, it does not have to be. It can just as easily be a pen tablet. And, generally speaking, there is only one pointer.

The difference between GUI and WIMP is that, because a WIMP interface relies on icons and a pointer (which are graphical by their nature), WIMP is considered a GUI. But not every GUI has to rely on all (or any) of the WIMP elements.

The context of multi touch

A system based on multi touch is always a GUI and uses a lot of the WIMP elements, but not all of them. There is no pointer, for instance. On mobile phones the applications usually run one at a time and don’t have their own window. Applications can run in the background (if the OS supports it), but they don’t have a window.

Because the commonly used OSes of today (Windows, OS X and Linux) are GUIs based on WIMP, the multi touch we experience today on desktops and laptops (mobile touch devices have their own OS, which actually is developed around touch input) takes over some of the pointer tasks.

Windows 7 gestures

Windows 7 natively supports multi touch gestures. The gestures make actions that were previously difficult or not well known easier to do.

  • Zoom – the zoom gesture is performed by placing two fingers on the surface and moving them apart or towards each other: apart to zoom in, towards each other to zoom out. This is useful for viewing photos, for instance, even though any self-respecting photo viewing software has zoom buttons available as well.
  • Single finger and two finger pan – Panning over an object larger than the viewport. This too can be a great extension for viewing photos, but also for any kind of object with a large amount of data, like a map. Again, one could use scrollbars to do so, but it’s impossible to scroll diagonally with scrollbars. And while there is a mouse action to click and drag with certain buttons, that is not a well-known or often-used action.
  • Rotate – Rotating objects (again, like photos) used to be doable only with a button. The limitation of that button is that it only rotates the object as far as the developer decided it should. With two fingers one can (with the sensitivity of the device in mind) rotate the object to any degree.
  • Two finger tap – Because the concept of only one pointer is out of the window, a two finger tap is different from a one finger tap and, as such, can be treated differently.
  • Press and tap – Just like the two finger tap, a press and tap is possible because of the multiple registration points. While one finger presses on the surface, a second finger can tap next to it.

These are not the only multi touch gestures that Windows 7 registers, but it gives an idea of how Windows 7 uses multi touch as an addition to its pointer concept. It takes actions that are less known or hard to do, and makes them more accessible.
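
As an aside, most of these two-finger gestures boil down to very little math: compare the distance and the angle between the two touch points with the values from the previous frame. The sketch below is a hypothetical illustration in plain C++ (it is not the Windows 7 touch API; the Touch struct and the sample values are made up).

    // Hypothetical illustration of how zoom and rotate can be derived from two touch
    // points. Not the Windows 7 touch API; the types and values are made up.
    #include <cmath>
    #include <cstdio>

    struct Touch { float x, y; };

    // Distance between the two fingers: its change over time drives the zoom gesture.
    float pinchDistance(const Touch& a, const Touch& b) {
        return std::hypot(b.x - a.x, b.y - a.y);
    }

    // Angle of the line between the two fingers: its change drives the rotate gesture.
    float pinchAngle(const Touch& a, const Touch& b) {
        return std::atan2(b.y - a.y, b.x - a.x);
    }

    int main() {
        Touch prevA{100, 100}, prevB{200, 100};   // previous frame
        Touch currA{ 90, 100}, currB{210, 120};   // current frame: fingers moved apart and rotated a bit

        // Scale factor > 1 means the fingers moved apart (zoom in), < 1 means zoom out.
        float scale = pinchDistance(currA, currB) / pinchDistance(prevA, prevB);
        // Rotation delta in radians; the sign convention depends on whether y points up or down.
        float rotation = pinchAngle(currA, currB) - pinchAngle(prevA, prevB);

        std::printf("scale: %.2f, rotation: %.2f rad\n", scale, rotation);
        return 0;
    }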

OS X gestures

OS X treats multi touch differently than Windows 7. Where the focus of Windows 7 lies on bringing ‘new’ or not well known actions to the user’s attention, OS X uses multi touch to map gestures to existing actions. It maps them to new actions as well (like the photo manipulation in Windows 7), but the focus lies on the existing ones. Here are a few examples.

  • Scrolling, drag two fingers down/up – a substitute for the scrollbars.
  • Pinching, two fingers moving apart or towards each other – same as the Windows 7 one.
  • Rotating, two fingers rotating around each other – same as the Windows 7 one.
  • Two finger panning – Basically the same as scrolling, but in every direction. This one can only be used if the program supports it.
  • Three finger swipe, move three fingers across the surface – to go to the next or previous tab/window, depending on the direction of the swipe.
  • Four finger swipe up/down – to enter or close Exposé.
  • Four finger swipe left/right – to switch applications, like the Cmd+Tab keystroke.

The difference between Windows 7 and OS X

OS X is really trying to interact with the existing OS actions in a different way, whereas Windows 7 seems to want to push new functionality. So they each have a different approach, although they share some similarities as well. What they both do is keep the original OS intact, which is not surprising because multi touch is not broadly supported yet. Compare that to mobile operating systems, which take full advantage of touch interaction. That is not a fair comparison, because a mobile OS does not have the performance of a PC, nor does it need it, and the intentions of a mobile system are different. But it illustrates the contrast between the two.

When I think of an OS for a PC or laptop, I think it can borrow some of the WIMP principles, but it really needs to be different in a lot of things. Naturally the single pointer concept needs to go out of the window, and in its place should come gestures that feel natural. And maybe the whole concept of on-screen menus can go as well, so that we summon menus by performing a gesture. That leaves room for the actual content of the application. Which way it will go, I don’t know yet, but I think that the single pointer concept has had its time.

Let me know what you think about it!

Narrowing down the Supportive Narrative

By | Graduation, Supportive Narrative | No Comments

The graduation consists of two parts: the project (in my case the installation) and a ‘supportive narrative’, or thesis. The supportive narrative supports the project and the project supports the supportive narrative.

For the project, I knew what I wanted to do, but I was not quite sure how I should approach the supportive narrative. I knew I wanted to focus my research on the gestures used to perform certain tasks, so I decided to look at the gestures made in multi touch interaction. I did this for a few reasons. First, there is already a lot of research done into multi touch, so I can find a lot of material to base mine on. Second, multi touch gestures look a lot like the gestures I have in mind, so if I can base the ones in my project on common multi touch gestures I don’t have to reinvent the wheel. Third, I am confident I can find some software that can be used to detect multi touch gestures. The less time I need to spend building the actual system, the more time remains to do the actual research.

For now, I have defined my research question as follows:

Are the principles of multi-touch interaction applicable to a system based on touchless interaction?

Since I do not know a term for an interaction form without physical touch, I have called it a ‘touchless’ interaction.

To get the answer to this question I came up with subquestions. First of all, I want to find out the definition of a gesture. When is a gesture a gesture, and what is it that makes it a gesture?

Second, I want to know which gestures I am going to use and try out in my project. I want to stay as close to actual computer tasks as possible, so the project can be of actual use. So I want to know ‘What are common tasks on a multi touch device?’. To answer this I first need to know what common computer tasks are in general. When I know that, I want to know how these tasks are performed on multi touch devices. The gestures that come out of those questions are going to be my project gestures. To gain more knowledge about them I take a little detour and want to find out whether these gestures have any advantages over the way the task is handled with a mouse and keyboard.

Third, I want to see if the gestures rely on physical touch, because my system is all about removing the physical connection. If the gestures I am going to use do rely on physical touch, then I want to see why they do so, and whether I can come up with a way to remove the physical element from the gesture. With physical touch I mean, for instance, pressure sensitivity. I don’t know of any gestures that rely on this off the top of my head, but I can imagine that I will come across some. In Photoshop (and this is not a multi touch gesture, but it illustrates the point), when you use a drawing tablet with a pen, the thickness of the brush is controlled by the pressure you apply to it. I can imagine the depth value calculated by the Kinect substituting for that pressure value. However, I don’t think this gives the same level of precision, and thus the same ease of use.
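
Purely as an illustration of that last idea (a hypothetical sketch, not something I have built: the near/far range values are made up, and only standard ofxKinect calls are used), depth could be remapped to a brush radius the way a tablet maps pen pressure:

    // Hypothetical sketch: mapping Kinect depth to a "pressure"-like brush radius.
    // The near/far range values and the fixed sample point are made up for illustration.
    #include "ofMain.h"
    #include "ofxKinect.h"

    class ofApp : public ofBaseApp {
    public:
        ofxKinect kinect;
        float nearMm, farMm;

        void setup() {
            kinect.init();
            kinect.open();
            nearMm = 600;                          // hand close to the sensor -> maximum "pressure"
            farMm  = 1200;                         // hand far away -> minimum "pressure"
        }

        void update() {
            kinect.update();
        }

        void draw() {
            // Pretend the tracked fingertip sits in the middle of the depth image.
            float mm = kinect.getDistanceAt(320, 240);                 // distance in millimetres
            if (mm > 0) {                                              // 0 means no depth data at that point
                // Closer finger -> bigger brush, like pressing harder with a tablet pen.
                float radius = ofMap(mm, nearMm, farMm, 40, 2, true);
                ofDrawCircle(ofGetWidth() / 2, ofGetHeight() / 2, radius);   // ofCircle() in older versions
            }
        }

        void exit() {
            kinect.close();
        }
    };

    int main() {
        ofSetupOpenGL(640, 480, OF_WINDOW);
        ofRunApp(new ofApp());
    }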

The first part will end with a discussion section in which I reflect on the results and give my own interpretations of them. I will use this section to give my personal opinions and put them into context.

When these subquestions are answered, I will have a set of gestures to work with and I will know how they work. By that time I am also planning to have the first prototype of my system ready.

So the second part of my supportive narrative will be the research into how people react to using the gestures. With the knowledge gained from the first part, I will come up with a hypothesis for a gesture for each task to be performed. Then I will do some user testing on each of the gestures and evaluate the results.

With those results I revise the hypotheses and start the user testing process all over. After five of these iterations I want to have the final gestures. The second part of the supportive narrative will end with a discussion section as well; this time I reflect on the practical research.

With the second part done, it is time to draw the final conclusions. This will be a summary of the results and an explanation of the final gestures and how they can be implemented. In the conclusion I will include a section ‘Limitations and open issues’, where I describe the limitations of the project and the research, and what the open issues (if any) are.

When all this is done, and the project is running, I think I will have a solid graduation project. I am still not sure what form the supportive narrative will take. The most logical form is on paper, I guess, but I am playing with the thought of putting it online together with this blog, but as two separate things. That idea came from the web version Richard Rutter made of the book “The Elements of Typographic Style”. His “The Elements of Typographic Style Applied to the Web” is basically a book in website form and I do like that concept. But that is food for thought for some other time.

For now, if you have any comments on how I am planning my supportive narrative, please share them in the comments.

The first step

By | Kinect | No Comments

So, today I made some progress. I had some trouble getting started with ofTheo’s ofxKinect addon at first. I could not find documentation for ofxKinect and I got a little confused by all the different addons and techniques that people use. So I kind of reverse engineered the example that comes with ofxKinect. By stripping out almost every line of code and adding them back one by one, I figured out how it works.

It turns out that ofxKinect acts like a normal video grabber. You have to initialise, open and close it, and you can check if a frame is new and get the pixels of a frame, just like with a normal video grabber. There are just some new methods, for example for checking the distance at a particular point, and instead of only the colored pixels you can also get the depth pixels.

The most handy features (of what I’ve found so far) are the far- and near-threshold members. They work almost like the focal point of a normal camera, except with hard edges: whatever is outside those boundaries simply does not get registered. The resulting image (a black background for everything that is not ‘in focus’, and a grayscale image for everything that is) can easily be put into some sort of blob detection, like OpenCV. Of course, that is what happens in the example: the image that comes out of the Kinect is put into an OpenCV image so it can be analyzed. From here on, ofxOpenCv takes over.
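
For reference, the pipeline described above looks roughly like the sketch below. This is a simplified version based on my reading of the ofxKinect example, not the example itself; some method names differ between openFrameworks versions, and the threshold values are just ones I happened to try.

    // Simplified sketch of the ofxKinect -> ofxOpenCv pipeline described above.
    // Based on my reading of the ofxKinect example, not the example verbatim.
    #include "ofMain.h"
    #include "ofxKinect.h"
    #include "ofxOpenCv.h"

    class ofApp : public ofBaseApp {
    public:
        ofxKinect kinect;
        ofxCvGrayscaleImage grayImage;         // thresholded depth image
        ofxCvContourFinder contourFinder;      // the "some sort of blob detection"
        int nearThreshold, farThreshold;       // depth brightness band to keep (closer = brighter)

        void setup() {
            kinect.init();
            kinect.open();
            grayImage.allocate(kinect.width, kinect.height);
            nearThreshold = 230;               // anything brighter (closer) than this is dropped
            farThreshold  = 70;                // anything darker (farther) than this is dropped
        }

        void update() {
            kinect.update();
            if (!kinect.isFrameNew()) return;

            // Older versions: setFromPixels(kinect.getDepthPixels(), kinect.width, kinect.height)
            grayImage.setFromPixels(kinect.getDepthPixels());

            // Keep only the pixels inside the near/far band; everything else becomes black.
            unsigned char* pix = grayImage.getPixels().getData();   // getPixels() returns a raw pointer in older versions
            int n = (int)(grayImage.getWidth() * grayImage.getHeight());
            for (int i = 0; i < n; i++) {
                pix[i] = (pix[i] < nearThreshold && pix[i] > farThreshold) ? 255 : 0;
            }
            grayImage.flagImageChanged();

            // From here on, ofxOpenCv takes over: find the blobs (hands, fingers, ...).
            contourFinder.findContours(grayImage, 20, (kinect.width * kinect.height) / 2, 10, false);
        }

        void draw() {
            grayImage.draw(0, 0);
            contourFinder.draw(0, 0);
        }

        void exit() {
            kinect.close();
        }
    };

    int main() {
        ofSetupOpenGL(640, 480, OF_WINDOW);
        ofRunApp(new ofApp());
    }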

Once I understood that, it all made some sense and I was able to put the finger detection of daniquilez in. That just analyses an OpenCV image, and mine happened to come out of a Kinect.

There are still some bumps in the road though. The grayscale image that comes out of the Kinect is kind of jagged. I still need to find a way to smooth that out so that the finger detection can do a better job. All in all, this was a first try to make some sense of it all just by hacking together some examples I found online.

I also found a blog post by Patricio González Vivo. He made a hack of NUI Group’s famous CCV app. His app takes the Kinect as a source and uses the same nice interface CCV has. I have not dived into the source code of Patricio’s app (which he so kindly shares) yet to see if the blobs that come out of it have a z (or depth) member. But when I do, I will try to fit in daniquilez’s finger detection as well.

I do not have a demo yet because this really is a first try, and things are a bit buggy at best. But I want to post something soon.