Bilateral teleoperation over network (source code and video)

It’s been more than a year since my last post as, to be honest, I’ve been quite busy. Thankfully, I passed my PhD viva and I’m currently working as a researcher at King’s College London. I’m still putting together a research plan but, in parallel with various other tasks, I managed to create some C++ code for bilateral teleoperation, i.e. teleoperation with force feedback, using two Geomagic Touch devices by 3D Systems over a network. You can find the code on GitHub.

The bilateral teleoperation system uses a position–position configuration. This means that only the angular values of the joints are exchanged between the devices. The upside is that there is no need for force sensors or external tools, as required by the position–force configuration, to receive forces from the remote environment. This keeps the system nice and simple but also more functional, as all the moving parts of the robotic arm will respond to any change imposed by the remote environment. The downside is that, in terms of transparency (quality of telepresence) during contact, it is just not as good as the alternative.

The touchp2p.cpp file, once compiled on each PC to which a Touch device is connected, runs a PID controller that receives the joint angles of the remote device as reference values. All you need to do is set the correct IPs of the local and remote machine on each side of the teleoperation system.
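To give a feel for what the controller does, here is a minimal per-joint PID sketch (in Python for readability; the real thing is C++, and the gains and 1 kHz rate below are made-up values, not the ones in the repository):

# Rough sketch of the control law -- not code from touchp2p.cpp.
KP, KI, KD = 1.2, 0.05, 0.01  # hypothetical gains, tune per device

class JointPID:
    def __init__(self):
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, reference, measured, dt):
        # reference: joint angle received from the remote device
        # measured:  the same joint's angle on the local device
        error = reference - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return KP * error + KI * self.integral + KD * derivative

# One controller per joint; every servo tick (e.g. 1 kHz, dt = 0.001)
# the output torque pulls the local joint towards the remote one:
#   torque[j] = pid[j].update(remote_angle[j], local_angle[j], 0.001)

Since both sides run the same loop against each other's angles, any displacement of one device is felt as a restoring force on the other, which is exactly where the force feedback comes from in a position–position scheme.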

Towards Haptic Communications over the 5G Tactile Internet

Hello everyone,

This is my first post of 2018 (hooray!) and hopefully not the only one. I am writing this blog post because a paper on which I was the lead author (together with some amazing co-authors) was recently published in IEEE Communications Surveys & Tutorials (IEEE COMST). Yes, it is a survey paper; it took me a really long time to write, and that’s why it needs a special blog post.

The paper is called “Towards Haptic Communications over the 5G Tactile Internet” and it surveys the most important technologies and methodologies that will allow teleoperation with haptic feedback over 5G networks to become a reality.

One thing I found difficult when writing this paper was that, because the topic is multidisciplinary, I had to find the right balance to mix “all the cuisines in a single plate” (thanks Aravindh!). I had to satisfy the COMST audience, which is mostly interested in the 5G networking aspect, but also cover important aspects and challenges of bilateral control systems theory and haptic data processing.

[Figure: haptic communication challenges, or all the cuisines in a single plate]

You can either download a pre-print version of the paper here (the link is under the “Documents” section), or you can visit IEEE Xplore if you have access. Same thing, basically.

Last but not least, many special thanks to all co-authors for their valuable contributions, help and support, and also to the anonymous IEEE COMST reviewers for the suggestions that really helped improve the manuscript.

Controlling a UR3 robot with gestures over a network

Hello everyone!

Thankfully, it didn’t take me a year to write a 2nd post! This time it’s about part of a project I worked on over the last 2-3 months (in parallel with my PhD studies, of course :P). The topic is again teleoperation, this time using only hardware (the last post was about a virtual environment), such as the UR3 robotic arm below (looks great, doesn’t it?).

[Image: the UR3 robotic arm]

The project I contributed to was demonstrated at Mobile World Congress 2017 in Barcelona, at one of Ericsson’s booths. Why Ericsson? Because King’s College London (where I study) and Ericsson collaborate on standardizing 5G. I must say it was a pleasure working with everyone who participated.

Needless to say, it was a really tiring week as the event hit a record 108,000 visitors, but what an experience it was… just epic! I also had the chance to meet and talk with many interesting and amazing people. For more information, there is a CNET article with a bit of a demonstration as well 🙂

Anyway… with the amount of time I had available to learn ROS and make the robot move, I could only create a gesture system that receives position commands from a client and moves the robot between pre-defined positions (a sketch of how such a script could look follows the setup steps below). I wish I had more time to make a direct control application (with speed and range-of-motion limiters, of course).

The juice

You can download the Python script here. So… to make the robot move, we used Linux (Ubuntu 14.04, specifically) with ROS. I say “we” because ROS was not used only for making the robot move. Anyhow, to make the app run you need to:

  1. Create a catkin workspace using ROS.
  2. Download the ROS-Industrial universal robot meta-package.
  3. Download ur_modern_driver and put everything inside the workspace’s src folder.
  4. Compile with catkin_make (yes, you will probably need to install many ROS dependencies).
  5. And then open a terminal to launch ROS with:

    $ source path/to/workspace/devel/setup.sh

    $ roslaunch ur_bringup ur3_bringup.launch robot_ip:=xxx.xxx.xxx.xxx (apply the correct robot IP)

  6. Open another terminal to run the application:

    $ source path/to/workspace/devel/setup.sh

    $ rosrun ur_modern_driver network_move.py
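For reference, here is a rough sketch of what a script like network_move.py could look like. It is not the actual script: the TCP port, the gesture names, the joint values and the one-word protocol are all my own illustrative assumptions; only the joint names and the follow_joint_trajectory action interface come from the UR driver stack.

#!/usr/bin/env python
# Sketch only -- not the actual network_move.py. Maps one-word commands
# received over TCP to pre-defined joint positions on the UR3.
import socket
import rospy
import actionlib
from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal
from trajectory_msgs.msg import JointTrajectoryPoint

JOINT_NAMES = ['shoulder_pan_joint', 'shoulder_lift_joint', 'elbow_joint',
               'wrist_1_joint', 'wrist_2_joint', 'wrist_3_joint']

# Hypothetical pre-defined poses (radians), one per gesture command.
POSES = {
    'home':  [0.0, -1.57, 0.0, -1.57, 0.0, 0.0],
    'left':  [1.0, -1.57, 0.0, -1.57, 0.0, 0.0],
    'right': [-1.0, -1.57, 0.0, -1.57, 0.0, 0.0],
}

def move_to(client, pose, seconds=3.0):
    # Send a single-point trajectory to the driver's action server.
    goal = FollowJointTrajectoryGoal()
    goal.trajectory.joint_names = JOINT_NAMES
    point = JointTrajectoryPoint()
    point.positions = pose
    point.velocities = [0.0] * 6
    point.time_from_start = rospy.Duration(seconds)
    goal.trajectory.points = [point]
    client.send_goal(goal)
    client.wait_for_result()

if __name__ == '__main__':
    rospy.init_node('network_move_sketch')
    client = actionlib.SimpleActionClient('follow_joint_trajectory',
                                          FollowJointTrajectoryAction)
    client.wait_for_server()
    # Accept one-word commands ('home', 'left', 'right') from the client.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('0.0.0.0', 30000))  # hypothetical port
    server.listen(1)
    while not rospy.is_shutdown():
        conn, _ = server.accept()
        cmd = conn.recv(64).strip()
        if cmd in POSES:
            move_to(client, POSES[cmd])
        conn.close()

On the client side, sending a gesture is then just a matter of opening a TCP connection to the robot PC and writing one of the pre-defined command words.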


Again, as with my previous post… I’m not sure if this is helpful to anyone, but it’s good to have it documented somewhere 🙂

A simple demo on the impact of latency in teleoperation

Hello everyone!

It’s been so long since I wrote a blog post. You can’t imagine how much I’ve been waiting for a reason to start writing and… here it is!

Since the last time I wrote something, I finished my MSc degree at NKUA and started my PhD at King’s College London. I am now studying and working on haptics over the upcoming 5G network infrastructure: how it works, how to improve it, and how to make it more usable for a number of use cases are a few of the questions I’m looking to answer.

The demo

I’m happy to say that I have just uploaded the first demo I ever made since I joined the KCL-Ericsson 5G lab. It’s rather simple but gets the job done. It also never fails to impress people who don’t know what it’s like to teleoperate something under latency.

So, here it is: https://github.com/constanton/DelayedTeleoperationDemo

It’s actually a modified version of one of the examples provided by the Chai3D C++ framework, which I’ve been using lately. The modification simply adds one buffer to the position channel and another to the feedback channel. As you increase the latency (i.e. the size of the buffers) above 10 ms, the data you send and the data you receive become de-synchronized, making the haptic device unstable… it really starts to “kick” when you touch anything with the grey ball.
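Conceptually, each buffer is just a FIFO delay line: a sample written now is read out N servo ticks later. Here is a minimal sketch of the idea in Python (the demo itself is C++/Chai3D; the class and variable names below are illustrative, not Chai3D API):

from collections import deque

class DelayLine(object):
    # FIFO buffer: a sample pushed now comes out `delay` pushes later.
    # At a 1 kHz haptic rate, delay=10 adds roughly 10 ms of latency.
    def __init__(self, delay, initial):
        self.buf = deque([initial] * delay)

    def push(self, sample):
        if not self.buf:
            return sample          # zero delay: pass straight through
        self.buf.append(sample)    # newest sample enters the line
        return self.buf.popleft()  # oldest sample leaves it

# One buffer per channel, mirroring the modified example:
position_channel = DelayLine(10, (0.0, 0.0, 0.0))  # device -> scene positions
feedback_channel = DelayLine(10, (0.0, 0.0, 0.0))  # scene -> device forces

# Inside the haptic loop:
#   delayed_pos   = position_channel.push(device_position)
#   delayed_force = feedback_channel.push(computed_force)

The instability then follows naturally: the force you feel was computed for a position you held tens of milliseconds ago, so on contact the loop overshoots and oscillates.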

The haptic device used is the Sensable Phantom Omni (the IEEE 1394 version), which works only under Windows (at least for me). So, in case anyone has made it work under Linux, please send over a how-to 🙂

There is room for improvements, further modifications and optimizations. One idea is to implement at least one stability control algorithm and compare the system’s behaviour with and without it.

Anyway, here is a pic from the application.

[Application screenshot: you can slide the cylinder along the string using the grey ball, which you control with the haptic device. You can change the latency (bottom right) with the + and - keys.]

Change a Font Awesome icon on hover (using content) + Sopler news!

Hi everyone! Over the past few days, we rolled out some major updates to Sopler that we started designing a long time ago.

It is now possible to set a due date or edit your items using a brand-new options menu. Also, when you enter a YouTube link, an (auto-scaling) player will appear on the list! 🙂

This post, however, is about changing one Font Awesome icon into another when the first one is hovered.

Firstly, I came across this post and a few (unrelated but helpful) answers on Stack Overflow that used the content property. I thought this might work pretty well, and it did.

For example,

  <div class="divclass">
    <i class="fa fa-circle-o"></i>
  </div>

using this CSS:

.divclass {
  font-size: 5em;
  color: grey;
  cursor: pointer;
}

.divclass:hover .fa-circle-o:before {
  content: "\f05d"; /* code point of the replacement icon (here fa-check-circle-o) */
  color: green;
  opacity: 0.4;
}

OK, the div element will be a full-width rectangle (use your browser’s developer tools to check what’s going on), but you can modify that later. Anyway, the result is: http://jsbin.com/noqiwi/

It might be trivial but it’s also a lot easier than other implementations I’ve seen so far.

An implementation of a person (and object) re-identification method

Hi! I would like to present to you my latest upload on GitHub:

https://github.com/constanton/bLDFV

It’s a C++ implementation of a research publication on human re-identification [1] (…with a very minimal OO design, though).

This programme was created for academic purposes (!) and it can most probably be used to re-identify other (similarly distinctive) objects as well, although this has not been tested. It divides each image into 3×4 blocks and uses very simple per-pixel features (HSV values, first and second order derivatives).

As you will see on the GitHub page, the programme uses the open source OpenCV and VLFeat libraries. Also, it was developed on a Fedora 19 64-bit machine using the Eclipse IDE.

I am aware that without proper documentation the learning curve for using this programme might be steep, but I hope I will find time to prepare something. All configuration happens inside the config.xml file. Using this file, after the training procedure (function zero), the programme creates a gmm_parameters.xml file, which must afterwards be used by all the other programme functions (one, two and three) to produce the fisher.xml files that contain the images’ Fisher vectors and the .csv files that contain the Euclidean distances. The results are very similar to those of the publication under the same evaluation methods.
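To give an idea of what those features are, here is a rough Python/OpenCV sketch of the per-pixel descriptors and the 3×4 block split (the programme itself is C++ with OpenCV and VLFeat; the exact descriptor layout here is a simplification of [1], and the function names are mine):

import cv2
import numpy as np

def pixel_features(patch_bgr):
    # Per-pixel descriptor: position, HSV values and first/second order
    # derivatives of the V channel (a simplification of the paper's LDFV).
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[:, :, 2]
    dx  = cv2.Sobel(v, cv2.CV_32F, 1, 0)  # first order derivatives
    dy  = cv2.Sobel(v, cv2.CV_32F, 0, 1)
    dxx = cv2.Sobel(v, cv2.CV_32F, 2, 0)  # second order derivatives
    dyy = cv2.Sobel(v, cv2.CV_32F, 0, 2)
    h, w = v.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    stack = np.dstack([ys, xs, hsv[:, :, 0], hsv[:, :, 1], v,
                       dx, dy, dxx + dyy])
    return stack.reshape(-1, stack.shape[2])  # one row per pixel

def block_features(img_bgr, rows=4, cols=3):
    # Split the image into the 3x4 grid and collect features per block;
    # a GMM is then fitted per block and Fisher vectors computed from it.
    h, w = img_bgr.shape[:2]
    blocks = []
    for r in range(rows):
        for c in range(cols):
            patch = img_bgr[r * h // rows:(r + 1) * h // rows,
                            c * w // cols:(c + 1) * w // cols]
            blocks.append(pixel_features(patch))
    return blocks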

Nonetheless, improvements can be made. For example, during the training process there is no random selection of the image features; adding one would improve the performance of the programme. Various other improvements are also possible.

[1] B. Ma, Y. Su, and F. Jurie, “Local descriptors encoded by Fisher vectors for person re-identification,” in ECCV Workshops (1) (A. Fusiello, V. Murino, and R. Cucchiara, eds.), vol. 7583 of Lecture Notes in Computer Science, pp. 413–422, Springer, 2012.

A Creative Commons music video made out of other CC videos

Hello! Let’s go straight to the point. Here is the video:

…and here are the videos that were used, all under the Creative Commons Attribution licence: http://wonkydollandtheecho.com/thanks.html. They are downloadable via Vimeo, of course.

Videos available from NASA and the ALMA observatory were also used.

The video (not the audio) is under the Creative Commons BY-NC-SA licence, which I think is quite reasonable since every scene used from the source videos (ok, almost every scene) has lyrics/graphics embedded in it.

I hope you like it! I didn’t have a lot of time to make this video, but I like the result. Unfortunately, the tools I used are not open source, as the learning curve for the open source alternatives is quite steep; I will definitely try them in the future. Actually, I really haven’t come across any alternative to Adobe After Effects. You might say Blender… but is it really an alternative? Any thoughts?

PS. More news soon about the Sopler project (a web application for making to-do lists) and other things I’ve been working on lately (like MQTT-SN).

2013 in review powered by WordPress

The WordPress.com stats helper monkeys prepared a cute annual report for this blog for 2013 🙂 For 2014, I’ve bought tickets for FOSDEM 2014 to represent Sopler and share the cool things that can happen with it.

Right now, I’m taking a look at Mozilla’s version of the l10n.js library in order to localise Sopler (at least on FirefoxOS).

Well, that’s it! Have a great 2014! By the way, the three-years-in-a-row champion post is once again “My Fedora 15 tweaks for an SSD” (ok, its count accumulates all previous years, but it still gets views). I guess both Fedora and SSDs have become even more popular!

PS. A weird thing to say, but if anybody works for a new wave/post-punk/indie rock record label, comment or PM me! I’ve got good news for you from Wonky Doll and the Echo.

Click here to see the complete report.

Making a Tabzilla clone using Bootstrap

Hi! As you might have noticed, Sopler has a navigation menu very similar to Mozilla’s Tabzilla… but it’s not 🙂

I think Tabzilla is a great tool, but while building Sopler’s front-end with Bootstrap I faced a problem integrating it: I had to put my HTML inside a string to place it in the menu, which is very restrictive. But then I found a similarity between a Bootstrap component and Tabzilla!

I have no idea if someone else has noticed this too, but Bootstrap’s accordion gives the same effect: it expands and collapses.

In this post I will use the template of the navbar-fixed-top example from Bootstrap 3. Go get it here (Download source); it’s in the examples folder. Tabzilla is not fixed, so I think we have an improvement here. Still, someone might find it too much, so to implement my example on navbar-static-top you won’t need the JavaScript code at the end of this post. Here is a demo for navbar-static-top (not the navbar-fixed-top mentioned before): http://jsbin.com/EkAweqA/1/ (press the Menu button).

The whole trick is to place the “accordion-body” div above the “accordion-heading” and then write a few lines of JavaScript to change the navbar’s position from fixed to relative… and that’s it!

Go to the navbar-fixed-top example, open index.html and replace everything between:

<!-- Fixed navbar -->

and

<div class="container">
<!-- Main component for a primary marketing message or call to action -->

(this is NOT the first div class="container" you’ll find; it’s the one on line 69) with the following:

<!-- This will not show up until you press the Menu button -->
<div id="login_box" class="login-box">
  <div class="accordion" id="accordion2">
    <div class="accordion-group">
      <div id="collapsemenu" class="accordion-body collapse">
        <div class="accordion-inner">
          <div class="row inner_box">
            <div class="col-md-12">
              <h2>Put any HTML goodies you want here!</h2>
            </div>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>

<!-- Fixed navbar (to make it static simply change to navbar-static-top) -->
<div class="navbar navbar-default navbar-fixed-top">
  <div class="container">
    <ul class="nav navbar-nav list-inline" style="float:right; margin-right: 5em; white-space: nowrap;">
      <div class="accordion-heading"></div>
      <button type="button" id="menu_button" class="btn btn-primary navbar-btn" data-toggle="collapse" data-parent="#accordion2" href="#collapsemenu">Menu</button>
    </ul>
    <a class="navbar-brand" href="#">Brand name</a>
  </div>
</div>

Now we need to add two more classes to navbar-fixed-top.css (it’s next to index.html inside the navbar-fixed-top folder):

.login-box {
  white-space: nowrap;
  top: 0;
  width: 100%;
  background: #fff;
  overflow: hidden;
}

.inner_box {
  margin: 0 auto;
  max-width: 57em;
  text-align: center;
}

Of course, you can style it the way you want! And finally, let’s go to the footer of the page (under the jquery.js call) and add this script (not needed for the navbar-static-top example):

<script type="text/javascript">
$(document).ready(function(){
  var isopen = false; // works like a switch
  $('#menu_button').click(function(){
    if (!isopen){
      // menu is about to open: unfix the navbar and scroll to the top
      $('.navbar-fixed-top').css('position','relative');
      $('html,body').animate({ scrollTop: 0 }, 'normal');
    } else {
      // menu is closing: pin the navbar back to the top
      $('.navbar-fixed-top').css('position','fixed');
    }
    isopen = !isopen;
  });
});
</script>

Let’s explain the script! We use the isopen variable (line 3), which is set to false by default and works like a switch. If the menu is not open and we click the Menu button, the page scrolls to the top and the position of the bar is set to relative. If we click the button again, the navbar’s position is set back to fixed. We don’t need to scroll in that case because, while the menu is open, the bar won’t move from its place anyway (that’s why we used relative).

If all goes well, when you click the Menu button it should look like this:

Introducing Sopler, a new open web application!

Here is some exciting news: for the last two months I’ve been part of a promising project. We made a web application that uses open standards (and it’s open source). It’s called Sopler, and we are the Sopler project. I worked on the UI/UX design part (meaning HTML5, CSS3, jQuery etc.) and oh boy… I’ve learned a lot of new stuff (like Bootstrap 3, Firefox OS etc.) and there are lots of things to post on this blog in the near future.

What can you do with Sopler? Make a list, add some items and share the link with your friends. They can add or check off items, make comments, etc.

Sopler is still in beta, but you can check out our code on GitHub. Enjoy our new video (with English & Greek subtitles available):

What can’t you see in the video? Sopler “remembers” all the previous lists of an authorized user (one who signed in using a social account); plus, their privileges over any item are greater than those of a non-authorized user.

That’s because a social account profile makes a user unique, which makes it easy to map a user to their lists.

Yes, Sopler is on many social networks too: Facebook, Twitter & Google+

So, see you soon with more news πŸ™‚