Wednesday, February 28, 2007

The new battlefield

Robots are getting smaller and smaller by the day. The first robots in sci-fi movies were enormous; they were portrayed as mechanical monsters operated by an evil genius who wanted to destroy or take over the world. Asimov, the sci-fi writer, formulated the laws of robotics, which state that a robot may never harm a human, not even if its operator intends it to.

I still believe that Asimov's laws should be followed, yet today we see robots guarding borders and military facilities, equipped with machine guns and other weapons. In essence there is nothing wrong with a robot walking around a compound while its operator stays in a warm and dry environment, but the moment you arm it, things change. At least, that is my opinion.

Robots as they were displayed in the first sci-fi movies have never become reality. The first robots that were actually built were big, though not as big as in the movies, and the trend now is to make them smaller and smaller.

The latest robot in the small-robot series comes from France. It was developed by SilMach and received a prize from the French Ministry of Defense, the Prix Science et Défense. The Libellule is 3 centimeters long and weighs 120 mg, of which 20 mg goes to the wings. As with many small robots, battery lifetime is the main problem.

That the French Ministry of Defense has honored the researchers with a prize indicates that it has an interest in small robots. This will most likely be the future of the battlefield in the coming years. Small robots, working alone or in a cluster, will scan the battlefield at or just above ground level. At higher altitudes, robots like the Predator will circle and give commanders the bigger picture, while at the same time other robots scan for mines and IEDs, patrol the perimeter of airfields, or are even used for house-to-house combat.

In the future there will be a big difference in how troops behave on the battlefield: the difference between countries that have the money to develop this kind of new battle equipment and those that do not. If a conflict like Somalia ever flares up again, the troops of the warlords might never even see the enemy troops from the Western world. It could very well be that if such a war breaks out again in 20 years, the robots sent in will be controlled from a main control room aboard a ship a couple of kilometers off the coast, well defended by a fleet of escort ships protecting it against every possible attack.

As I already stated, I am against arming robots. There is a difference between standing on a battlefield and looking at the other man before you decide to pull the trigger, and sitting behind a screen in a safe, air-conditioned operating room. In my opinion the operator will decide to open fire much more quickly than the man on the ground, and even though all the systems are there to give the operator a clean and open view of the battlefield, he cannot make a decision like the man on the ground, who can hear, see, and smell everything the same way as his opponent.

I do not think it is a bad thing to use robots to clear mines, patrol airfields, and perform all kinds of reconnaissance tasks; I even have the feeling they might be better at this than a human. But having them engage in a real war and in real combat is not the way to go... at least not at this moment and not with Asimov's laws in place. Thank God for Asimov's laws... If countries had large robot armies, the step to go to war would be so much smaller, and I think the collateral damage, the civilian casualties, would be a lot bigger.

When a president or government has to send their own young men and women out there, with a chance that they might end up in a body bag, they have one more point to consider. If only damaged machines came back home, that would not even be a point of discussion when deciding to go to war.

Tuesday, February 27, 2007

Trip to Prague


As you might know, I have a new job... Thursday I will start as an Oracle Consultant at my new employer. This has given me the opportunity to travel to Prague for a couple of days, and it is also the reason why I have not been updating this weblog for some time. If you would like to view the photos of my Prague trip, take a look at Flickr.

Wednesday, February 14, 2007

Fire robot.




Agent 007 is a mighty versatile fellow, but he would have to take a back seat to the agents being trained at Washington University in St. Louis.

Computer scientists and engineers here are using wireless sensor networks that employ software agents; so far the agents have been able to navigate a robot safely through a simulated fire and to spot a simulated fire by seeking out heat. Once an agent locates the fire, it clones itself - try that, James Bond - creating a ring of software around the fire. A "fireman" can then communicate with this multifaceted agent through a personal digital assistant and learn where the fire is and how intense it is. Should the fire expand, the agents clone again and maintain the ring - an entirely different "ring of fire."

Agents in computer lingo are specialized pieces of code that are self-contained and mobile. Wireless sensor networks are made up of tiny computers that can fit in the palm of a hand. They can run on simple AA batteries, sport an antenna and a sensor with a specialized duty of sensing the environment -- temperature, magnetism, sound, humidity, for instance.

Gruia-Catalin Roman, Ph.D., the Harold B. and Adelaide G. Welge Professor of Computer Science and department chair, envisioned a new kind of software architecture to support applications targeted to the sensor network environment. Together with Chenyang Lu, Ph.D., Washington University assistant professor of computer science and engineering, and with his doctoral student Chien-Liang Fok, Roman developed a middleware - a special kind of software - called Agilla, which enables agents to move across a sensor network, and between sensor networks connected via the Internet, and to clone themselves, thus forming complex communities of cooperating agents.

This approach to the development of sensor network applications is novel and offers an unprecedented level of flexibility. It also permits multiple applications to co-exist over the same basic hardware in response to changing needs.
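
To make the clone-and-ring idea concrete, here is a small Python sketch of how agents could flood a hot region and leave clones sitting on the cool nodes around it. This is only an illustration of the concept, written for this post: Agilla agents are actually written in their own tuple-space-based agent language, and the grid, threshold, and all names below are made up.

```python
# Illustrative sketch of the "clone a ring around the fire" idea.
# NOTE: plain Python written for this post, not Agilla's actual agent
# language. The grid, threshold, and names are all made up.

FIRE_THRESHOLD = 60.0  # assumed temperature, in degrees Celsius


class Node:
    """One tiny sensor node in a grid-shaped wireless sensor network."""

    def __init__(self, x, y, temperature):
        self.pos = (x, y)
        self.temperature = temperature
        self.has_agent = False  # whether an agent clone resides here


def neighbours(grid, node):
    """Nodes one radio hop away (4-connected grid)."""
    x, y = node.pos
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if (x + dx, y + dy) in grid:
            yield grid[(x + dx, y + dy)]


def form_ring(grid, start):
    """Clone outward from one agent until the clones sit on the cool
    nodes bordering the hot region, i.e. a ring around the fire."""
    start.has_agent = True
    frontier = [start]
    ring = []
    while frontier:
        node = frontier.pop()
        if node.temperature >= FIRE_THRESHOLD:
            # Hot node: keep cloning onto neighbours that have no agent yet.
            for nb in neighbours(grid, node):
                if not nb.has_agent:
                    nb.has_agent = True
                    frontier.append(nb)
        else:
            # Cool node next to the fire: this clone stays put and could
            # now answer queries from a firefighter's PDA.
            ring.append(node)
    return ring


if __name__ == "__main__":
    # 5x5 grid at 20 C with a hot 2x2 patch in the middle.
    grid = {(x, y): Node(x, y, 20.0) for x in range(5) for y in range(5)}
    for pos in ((2, 2), (2, 3), (3, 2), (3, 3)):
        grid[pos].temperature = 80.0

    for node in form_ring(grid, grid[(2, 2)]):
        print("ring node at", node.pos, "temperature", node.temperature)
```

The same flood-and-stop pattern is what keeps the ring intact when the fire grows: hot nodes keep cloning outward, cool nodes stay put and report.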

Roman believes that wireless sensor networks are poised to explode upon the world stage, similar to the way that the Internet took off after the creation of the World Wide Web.

"What researchers are banking on is that sensor networks will be so cheap to make that they can be employed on a very large scale," said Roman, who directs Washington University's Mobile Computing Laboratory. "This way you can spread hundreds and thousands of them around gathering data and communicating."

Imagine a farmer wanting to get soil data - temperature, or pH - over hundreds of acres with slightly varying soil types. Instead of painstakingly physically taking measurements - being a farmer 'outstanding in his field' - in theory, he could send a software agent with pH-sensing capabilities to a particular sensor network, have the pH agent clone itself and gather the data over hundreds of acres, then transfer itself onto another sensor network on the Internet and send its data back to the farmer's office. That's not Old MacDonald's farm.

Similarly, a manufacturer might want to safeguard containers in a warehouse. A sensor network can be put in place on the containers whose nodes communicate with each other and raise an alarm should, say, light or a vibration be sensed. Again, the manufacturer can remain in his office and communicate with the network with a PDA.

"This is fascinating software, and this technology is opening up, and we have no idea where it's going to go, " Roman said. "Right now, wireless sensor networks are allowing us to explore the future."

One of the key features of Agilla software is its flexibility. For instance, in the fire-simulation study, the networks allow for both simulation of fire and tracking of the fire. Agilla is considered a major breakthrough in the field of wireless sensor networks and lays the foundation for rapidly developing applications.

The Aristo robot and navigation experiment are courtesy of Burchan Bayazit, Ph.D., Washington University assistant professor of computer science and engineering, and his graduate student Nuzhet Atay, of the University's Media & Machines Laboratory.

Crash during landing.

After some searching I finally found some pictures online of an unfortunate landing by my father-in-law. Here you can see two pictures of the landing and one taken a couple of hours afterwards. According to the story he told me, the approach went without any problem: the landing gear came out and showed all green. However, at the moment of touchdown one of the landing gears turned out not to be secured. One of the pins that should have locked the gear in place had malfunctioned, which resulted in that gear being pushed back in on touchdown.

As you can see in the pictures, the result was that the plane tilted. By reversing the thrust of one of the engines and steering hard to one side, he and the co-pilot were able to prevent the plane from making a half turn at full speed on the runway.

The end result was some bruises, shocked passengers, and a damaged plane. This all happened somewhere in March 1989 at the airport of Geneva.



Monday, February 12, 2007

Fly-by-Sight Microrobots


The fly's eye is one of the most well-organized units of visual optics in the world. But is it possible to understand how it works and reproduce it on board neuromimetic robots that could navigate in complete safety? This is the two-part question that Nicolas Franceschini's research team is trying to answer.

The more Nicolas Franceschini tells you about the research he's directing at the helm of the biorobotics department of the Movement and Perception laboratory in Marseille,1 the more you get the urge to stare out the window to watch flies. It's not that there's no buzz in the conversation of this expert in insect sight; quite the contrary. You listen to him enthusing about the amazing behavior of these “agile airships” that he's been studying for over thirty years and that today enable him to design “artificial flying creatures.” These are so efficient that they make planes and helicopters pale with envy. You just can't help looking out to marvel at the arabesques of these insects buzzing past at high speeds without ever crashing.

But what's so special about the fly's eyes, which have made these insects champion stunt fliers for over 100 million years? First of all, answers Franceschini, “they're totally panoramic, so they can take in the whole of their surroundings.” Furthermore, the fly's “cockpit” contains about 1 million neurons powered by electric signals from the 48,000 photoreceptor cells that make up the mosaic of the retina. The neuron network processes these signals and sends the “electric flight controls” to 18 pairs of motor muscles that adjust the amplitude, frequency, and angle of attack of the wings in real time. This is what lets the fly change direction rapidly, escape from its predators, and detect and follow a sexual partner with single-minded determination. The best robots produced today are nowhere close to replicating this kind of precision. Having deciphered how the movement-detecting neurons work, using microelectrodes and special microscopes, the team in Marseille managed to transcribe the main functions into miniature optoelectronic circuits which, 15 years ago, gave sight to non-flying robots that could avoid obstacles without help.

More recently, Franceschini discovered that the insect's retina, which is normally extremely stable, starts to vibrate actively in flight. To understand why this happens, the researchers did a computer simulation of the mechanism, reproduced it technologically and integrated it into a 100-gram twin-engine airship they called Oscar.

“The retina microscanning process allows the fly to keep its eye fixed on a target with a fine-tuning 40 times better than if the retina remained static,” says Franceschini. He's delighted that Oscar never gets flummoxed, in spite of all the devious ways that its designers find to torture it in their experiments (crosswinds, shocks to the fuselage, etc.). Its “vibrating eye,” connected to an electronic movement detector neuron, locks its sight onto the target “like a male fly chasing a female.” This opens up the possibility of one day replacing the costly, heavy radars and energy-hungry lidars2 used on some helicopters to locate high-voltage cables with an anti-collision system derived from the fly's eye that emits no radiation, is inexpensive, and weighs next to nothing.

But there's another flying insect-robot with “visuomotor intelligence” that also has a bright industrial future.

Octave, a 100-gram helicopter, uses an eye in its underbelly that points earthwards, and is directly inspired by the fly's flight-control mechanisms. This allows it to take off and follow a slope at a speed of three meters per second, react to a head wind and land automatically, without any help from avionics (speedometer, altimeter, variometer) but using an “optic flow regulator.”3 This masterpiece could eventually be used as an automatic pilot on spacecraft and aircraft, and even on submarines observing the ocean floor.
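
The “optic flow regulator” mentioned above lends itself to a very short sketch. The ventral optic flow seen by a downward-pointing eye is roughly ground speed divided by height, and holding it constant by adjusting the flying height automatically produces terrain following and, as the craft slows down, a landing. The Python below is a toy simulation of that feedback loop written for this post; the setpoint, gain, and speed profile are invented for illustration and are not Octave's real parameters.

```python
# Toy simulation of an optic-flow regulator: hold the ventral optic flow
# (roughly ground speed divided by height) at a setpoint by adjusting the
# flying height. Setpoint, gain, and speed profile are invented values.

OPTIC_FLOW_SETPOINT = 3.0   # rad/s, assumed
GAIN = 0.5                  # proportional gain, assumed
DT = 0.05                   # control period, seconds


def ventral_optic_flow(ground_speed, height):
    """What a downward-pointing eye measures: speed over height."""
    return ground_speed / max(height, 0.1)


def simulate(ground_speed_profile, height=2.0):
    """Run the regulator over a list of ground speeds, one per time step."""
    for speed in ground_speed_profile:
        flow = ventral_optic_flow(speed, height)
        # Flow too high (flying too low or too fast): climb.
        # Flow too low: descend. Holding the flow constant makes the craft
        # sink as it slows down - the automatic landing described above.
        error = OPTIC_FLOW_SETPOINT - flow
        height = max(height - GAIN * error * DT, 0.0)
        print(f"speed={speed:4.1f} m/s  flow={flow:4.2f} rad/s  height={height:4.2f} m")
    return height


if __name__ == "__main__":
    # Cruise at 6 m/s, then slow down gradually as if coming in to land.
    profile = [6.0] * 40 + [6.0 - 0.15 * i for i in range(1, 40)]
    simulate(profile)
```

Note that the craft never measures its speed or altitude separately; in the real system only the flow itself is sensed, which is why no speedometer, altimeter, or variometer is needed.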

These CNRS-patented results, which are at the crossroads of ethology, integrative neuroscience, and robotics, do two things at once: They enrich our knowledge of living beings while giving rise to bionic machines inspired by age-old natural principles.

1. CNRS / Université de la Méditerranée joint lab.
The team received the prize “La Recherche 2005.”
2. For Light Detection And Ranging: A detector that emits a laser beam and receives the echo back.
3. “Optic flow” refers to the scrolling of the retinal image created by movement.
The optic flow regulator keeps the optic flow constant by acting on the flying height.

This article was originally written by Philippe Testard-Vaillant for CNRS magazine.

Friday, February 09, 2007

Soft and mobile robots.

Tufts University biologists and engineers have launched an initiative to build robots with pliable parts — from the body down to the electronic components and wiring.

The soft-sided robots will be able to collapse into small packages and spring back to form as well as climb textured surfaces and burrow into confined spaces.

Flexible robots could do everything from exploring inside a person's body to help doctors make a medical diagnosis, to examining industrial pipelines for damage. They could also be used to climb through hazardous zones, such as the inside of a nuclear reactor or a minefield, or to repair delicate solar panels on space vehicles.

"In the next year or so we will be building machines that are squishy and can crawl," said Barry Trimmer, who, with David Kaplan, co-directs the Biomimetic Technologies for Soft-bodied Robots project.

Trimmer has spent years studying the nervous systems of caterpillars and silkworms to better understand how the creatures control their undulating motions with very simple brains.

He has identified individual cells in the brain responsible for certain movements.

Because the animals are flexible and lack skeletons, their movement is less limited than that of animals with skeletal systems. The human elbow, for example, can only bend in two directions.

Many rigid robots have similar kinds of joints.

But caterpillars can move in many directions, and don't need much neural power to do it. Their muscles respond to simple signals from the brain, yet have the ability to twist and contort to move up a tree branch or reach under a leaf.

The idea is to build a robotic control system that mimics the nervous system and muscle power of the caterpillar.

Kaplan's department will conduct research on the soft materials meant to make up the body and electronics. Kaplan is applying genetic engineering and nanotechnology to produce flexible materials as tough as spider silk.

In the end, the team hopes to create "a robot that you can pick up and crumple into a ball in your hand, let it go, and watch it walk away," said Trimmer.

Having robots that lack rigid parts is a goal shared by other researchers, said professor Ian Walker of Clemson University in Clemson, S.C. But a lack of metal could mean a lack of power.

"Other groups, including our own in developing 'trunk and tentacle' robots, have aimed at producing completely soft robots, but have had to settle for having at least some rigid parts," he said.

"It will be interesting to see if they can get enough strength from the materials and components they plan to use. This has been a key issue."