
Simple agents, complex behaviours II: Coincidences [Robotics Tutorial]


Imagine you are a predator in the wild, perhaps a bat or an owl, trying to catch an elusive prey; a sound gives away its location, and a pursuit ensues. Both predator and prey use sound to localize the potential meal and the looming threat. The accuracy of this localization suggests an ostensibly complicated mechanism, yet most animal brains rely on a relatively simple principle based on Interaural Time Differences (ITD). In doing so, they use their body morphology to solve the problem in a remarkably straightforward way.


Also Read: Simple agents, complex behaviours I: Uncertainty [Robotics Tutorial]

As with animals in the wild, robots also benefit in some cases from correctly localizing sound sources, although fortunately not as predators. Instead, conventional applications like companion robotics use sound localization on a daily basis. In this article, we continue our study of simple bioinspired mechanisms that allow complex behaviours to emerge. Our approach is that of neurorobotics: we take a closer look at the brain and its components to implement the required functionality.

 

The Jeffress Mechanism

The most widely known model of ITD detection in the brain was proposed by Lloyd Jeffress. It contains elegant ideas that reappear in many different algorithms dealing with event-time processing. Its working principle is simple - if you have two ears at your disposal, the relative difference in the signal's arrival time at each of them can be used to estimate the azimuth angle of the source.
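To see how, here is a minimal sketch, not part of the robot code below, that inverts the idealized plane-wave relation dt = d*sin(theta)/c to recover the azimuth from a measured time difference; the ear separation d and speed of sound c are illustrative values:-

import numpy as np

def azimuth_from_itd(dt, d = 0.2, c = 343.0):
    # Idealized plane-wave model: dt = d*sin(theta)/c, so
    # theta = arcsin(c*dt/d); clip guards against noisy measurements
    return np.arcsin(np.clip(c*dt/d, -1.0, 1.0))

# A 0.3 ms lead on one ear puts the source roughly 31 degrees to that side
print(np.degrees(azimuth_from_itd(0.3e-3)))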

This principle can be exploited even further to estimate the incoming sound's elevation and other properties. Variations of this theme show up across many species, a product of evolution's arms race between predators and prey. But let's keep it simple for now; the basic circuit contains three main components - two sensors, two delay lines and a set of neurons with a particular topology, shown in the figure:-

[Figure: the basic Jeffress circuit - two sensors feeding two antiparallel delay lines, with coincidence-detector neurons along them]

Here is how it works. The two sensors sit on opposite sides of the body, so the signal arrives at each of them at a slightly different time. Each sensor generates a signal that propagates along its delay line and excites the neurons as it passes; however, the excitation from a single line is insufficient to activate a given neuron, and only when the two signals coincide is an output generated. The neurons act as coincidence detectors! Notice that the signal's arrival order is inverted from the two sensors' perspectives: the two pulses travel in opposite directions, so the place where they meet encodes the time difference.
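A quick back-of-the-envelope calculation makes this precise: two counter-propagating pulses launched at times t1 and t2 on a line of length L with speed v meet at x = L/2 + v*(t2 - t1)/2, so each neuron along the line corresponds to a specific arrival-time difference. A small sketch, with illustrative numbers:-

def coincidence_position(t1, t2, L = 1.0, v = 10.0):
    # Right-going pulse: x = v*(t - t1); left-going pulse: x = L - v*(t - t2).
    # Setting them equal gives the meeting point.
    return 0.5*(L + v*(t2 - t1))

print(coincidence_position(0.0, 0.0))   # simultaneous arrival -> centre, 0.5
print(coincidence_position(0.0, 0.02))  # left signal leads -> shifted right, 0.6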

There is a lot to unpack here, so let's go directly to the implementation in order to understand how the mechanism works and how the different components operate in concert to generate a directional signal.

 

The Sound source

The first step is to create a sound source. Simulating sound waves in detail can get very intricate; therefore, to keep things simple, we will use a sinusoidal wave emanating from a point source as our environmental signal. We can achieve this with the equation:-

$$ s(x, y, t) = e^{\,ik\sqrt{(x - x_0)^2 + (y - y_0)^2}} \, e^{-i\omega t} $$

 

Here, (x0, y0) is the position of the source, k is the spatial frequency and ω (omega) the temporal frequency of the wave. In the implementation, we only return the real part of the wave:-

$$ \hat{s}(x, y, t) = \operatorname{Re}\{ s(x, y, t) \} $$

Here is the complete Python code in source.py :-

import numpy as np

class Source(object):
    def __init__(self, x, y):
        self.k0 = 20.0   # spatial frequency k
        self.w0 = 1.0    # temporal frequency omega
        self.x0 = x      # source position
        self.y0 = y
        # Complex wave emanating from the point (x0, y0)
        self.s = lambda x, y, t: \
                 np.exp(1j*(self.k0*np.sqrt((x - self.x0)**2 + (y - self.y0)**2)))* \
                 np.exp(-1j*self.w0*t)

    def getSound( self, t ):
        # Freeze time t and return the real field as a function of (x, y)
        return lambda x, y: np.real(self.s(x, y, t))
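As a quick sanity check, we can sample the field at two nearby points, which is exactly what the two "ears" will do later; the positions below are the ones used in the test further down:-

# Sample the wave at two nearby points at t = 0; the slightly different
# distances to the source produce slightly different phases
src = Source(1.0, 1.0)
G = src.getSound(0.0)
print(G(1.85, 0.1), G(1.95, 0.1))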

 

Neurons

The device that allows us to detect coincidences is a neuron. This is the first time we have encountered a neuron in this series of articles, so let's dedicate some time to analyzing its inner workings. The neurons in this case are slightly different from their usual machine-learning counterparts: they possess intrinsic dynamics due to the electrical properties of their membrane. The simplest model that captures such dynamics is the integrate-and-fire neuron:-

$$ \tau \frac{du}{dt} = -(u - u_{\mathrm{rest}}) + R\,I(t), \qquad u \geq \vartheta \Rightarrow \text{spike, } u \leftarrow u_{\mathrm{rest}} $$

It is composed of two parts. The membrane dynamics evolve according to a first-order differential equation: each time an input is received, the membrane's voltage increases a little until it reaches a predefined threshold, at which point the neuron fires an event and is reset to its resting state, so the integration can start again. Hence the name integrate-and-fire.

In the implementation, we use Euler's method to integrate the equation and perform the corresponding validation in order to reset the potential u.

$$ u \leftarrow u + \frac{h}{\tau}\left[-(u - u_{\mathrm{rest}}) + R\,(I_1 + I_2)\right] $$

Here is the complete Python code in neuron.py :-

class Neuron(object):
    def __init__(self, pos, thres = 0.61):
        self.urest = 0.0    # resting potential
        self.tau = 1.0      # membrane time constant
        self.R = 3.0        # membrane resistance
        self.thres = thres  # firing threshold
        self.pos = pos      # position along the delay line
        self.u = 0.0        # membrane potential
        self.h = 0.1        # Euler integration step

    def step( self, I1, I2 ):
        # Euler step of the membrane equation with two input currents
        self.u += self.h*(-self.u + self.urest + self.R*(I1 + I2))/self.tau

        if self.u > self.thres:
            self.u = self.urest  # fire and reset
            return 1

        return 0
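We can verify the threshold behaviour by driving a neuron with a constant input and watching it fire periodically; the input value below is arbitrary, chosen so that the steady-state potential sits above the threshold:-

# Constant supra-threshold input produces roughly periodic firing
n = Neuron(pos = 0.5)
spikes = [t for t in range(100) if n.step(0.15, 0.15) == 1]
print(spikes)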

 

The Sensor

We now have a sound signal from the environment and a neuron tuned so that it only fires an event when the signals from both ears coincide. Our robot now needs a sensor, which will have two parts. The first is a peak-detection mechanism that detects sign changes in the slope of the signal at a given position:-

$$ (p_{t-1} - p_{t-2})\,(p_t - p_{t-1}) < 0 $$

Here is the complete Python code in sensor.py :-

import numpy as np

class Sensor(object):
    def __init__(self, x, y):
        self.pos = np.array([x, y, 1])  # position in homogeneous coordinates
        self.pp = np.zeros(2)           # the two previous samples

    def transform( self, T ):
        self.pos = np.dot(T, self.pos)

    def sense( self, G, T = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) ):
        pos = np.dot(T, self.pos.T)  # sensor position in the world frame
        p = G(pos[0], pos[1])        # sample the sound field

        # A sign change in the discrete slope marks a peak; only
        # positive peaks (maxima) produce an event
        if (self.pp[1] - self.pp[0])*(p - self.pp[1]) < 0:
            self.pp[0] = self.pp[1]
            self.pp[1] = p
            return 1 if self.pp[1] > 0 else 0

        self.pp[0] = self.pp[1]
        self.pp[1] = p
        return 0
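A small test confirms that the detector fires once per positive peak; here a plain sinusoid stands in for the sound field:-

import numpy as np

# Count events over two full periods of a sinusoid: only the two
# positive peaks (maxima) should trigger the detector
s = Sensor(0.0, 0.0)
events = 0
for t in np.linspace(0, 4*np.pi, 200):
    events += s.sense(lambda x, y: np.sin(t))
print(events)  # prints 2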

The second is a transduction mechanism that transforms the detected peak into a travelling wave along the delay line:-

$$ w(x, t) = e^{-300\,(x + c\,t - x_0)^2} $$

Here is the complete Python code in delay_line.py :-

import numpy as np

class DelayLine(object):
    def __init__(self, x0, c):
        self.t = 1000                            # large initial time: the pulse starts far away
        self.w = lambda x: np.exp(-300.0*x**2)   # Gaussian pulse shape
        self.x0 = x0                             # starting position of the pulse
        self.c = c                               # propagation speed (sign gives direction)

    def step(self, I):
        # A detected peak (I == 1) launches a new pulse by resetting the clock
        if I == 1:
            self.t = 0.0
        self.t += 0.01

    def read( self, x ):
        # Value of the travelling pulse at position(s) x
        return self.w(x + self.c*self.t - self.x0)
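To see the pulse travel, we can trigger the line once and read it on a few subsequent steps; the parameters match the ones used in the test below:-

import numpy as np

# Launch a pulse at x0 = 0 moving in the +x direction and watch it
# slide across the interval [0, 1]
line = DelayLine(0.0, -10.0)
line.step(1)                         # a detected peak resets the clock
x = np.linspace(0, 1, 5)
for _ in range(3):
    line.step(0)                     # advance the internal clock
    print(np.round(line.read(x), 3))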

The detection step is not too far from how the detectors in some animals work, although they are certainly more complex! Moreover, signals along nerves do travel as waves of discrete events called action potentials. Overall, we can use this simplified pulse to excite the different neurons as it passes.

 

Testing the mechanism

An initial test is in order. We start by simulating our sound source on a predefined time grid. At each time step, we perform a measurement with the two sensors at a fixed location, separated by what will be our body radius.


We then use the output of the sensors as input to the delay lines. The value of each travelling pulse, read at fixed intervals along a one-dimensional domain, activates our neurons.


We now have two beautiful travelling waves and a seemingly effective coincidence detection mechanism:-

[Figure: the zero-level contour of the sound field with the two sensor positions, the two travelling pulses on the delay lines, and the spike count accumulated by each neuron]

Here is the complete Python code in jeffress_test.py :-

import numpy as np 
import matplotlib.pyplot as plt 
from source import *
from neuron import *
from delay_line import *
from sensor import *

t = np.linspace(0, 100, 500)
plt.show(block = False)
xx = np.arange(-2,2,0.02)
yy = np.arange(-2,2,0.02)
X, Y = np.meshgrid(xx, yy)

fig, ax = plt.subplots(1, 3)

probot = 1.9
source = Source(1.0, 1.0)
sensor1 = Sensor(probot-0.05, 0.1)
sensor2 = Sensor(probot+0.05, 0.1)
line1 = DelayLine(0.0, -10.0)
line2 = DelayLine(1.0, 10.0)
x = np.linspace(0, 1, 100)

neurons = [Neuron(p) for p in np.arange(0, 1, 0.1)]
spike_counts = np.zeros(len(neurons))

for i in range(len(t)):
    G = source.getSound(t[i])
    Z = G(X, Y)
    r1 = sensor1.sense(G)
    r2 = sensor2.sense(G)
    line1.step(r1)
    line2.step(r2)

    ax[0].cla() 
    ax[0].contour(X, Y, Z, [0])
    ax[0].set_xlabel('x')
    ax[0].set_ylabel('y')
    ax[0].plot(sensor1.pos[0], sensor1.pos[1], 'k.', markersize = 7)
    ax[0].plot(sensor2.pos[0], sensor2.pos[1], 'k.', markersize = 7)
    ax[0].axis([0, 2, 0, 2]) 

    ax[1].cla()
    ax[1].plot( x, line1.read(x), 'k', linewidth = 3.0)
    ax[1].plot( x, line2.read(x) + 2, 'r', linewidth = 3.0)
    ax[1].set_xlabel('Delay line')
    ax[1].axis([0, 1, 0, 3])
    ax[2].cla()  # clear before redrawing so the bars do not accumulate
    barc = ax[2].bar(range(len(spike_counts)), spike_counts, color = 'b')
    ax[2].set_xlabel('Neuron')
    ax[2].set_ylabel('# of spikes')

    for j in range(len(neurons)):
        sp = neurons[j].step(line1.read(neurons[j].pos), line2.read(neurons[j].pos))
        spike_counts[j] += sp
        barc[j].set_height(spike_counts[j])

    plt.pause(0.01)
plt.show()
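The same pipeline can also run headless: after the loop, the index of the most active neuron tells us where along the line the pulses tend to coincide, and hence on which side the source is. Here is a minimal sketch reusing the classes above:-

# Headless readout: accumulate spikes and report the winning neuron,
# whose position along the line encodes the dominant time difference
src = Source(1.0, 1.0)
s1, s2 = Sensor(1.85, 0.1), Sensor(1.95, 0.1)
l1, l2 = DelayLine(0.0, -10.0), DelayLine(1.0, 10.0)
cells = [Neuron(p) for p in np.arange(0, 1, 0.1)]
counts = np.zeros(len(cells))

for ti in np.linspace(0, 100, 500):
    G = src.getSound(ti)
    l1.step(s1.sense(G))
    l2.step(s2.sense(G))
    for j, n in enumerate(cells):
        counts[j] += n.step(l1.read(n.pos), l2.read(n.pos))

print('winner neuron:', np.argmax(counts))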

Let's see now how it performs with a body.

 

Connecting the body

Given the components already implemented, a Braitenberg-style connection rule that orients the robot towards the source is easy to find. Note that, in this case, the sensor-actuator connection has an additional step of computation; it has a brain!

We want the vehicle to turn left when the right-most neuron fires, as that means the signal arrived at the left ear first, travelled along the delay line and coincided with the one coming from the right ear later. Moreover, we want the activation of each wheel to be proportional to the position of the firing neuron along the line. Therefore, a simple connection rule we can use is:

$$ \phi_L = \sum_j s_j \,(1 - p_j) $$

and,

$$ \phi_R = \sum_j s_j \, p_j, $$

where $s_j \in \{0, 1\}$ is the output of neuron $j$ and $p_j \in [0, 1]$ its position along the delay line,

which we implement as follows:-


Here is the complete Python code in braitenberg.py :-

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
from neuron import *
from sensor import *
from delay_line import *

# Generic vehicle
class Vehicle(object):
    def __init__(self, x0, y0, theta0):
        self.pstep = 0.05
        self.ostep = 0.5
        self.x, self.y, self.theta = x0, y0, theta0
        self.scale = 0.1
        self.r = 1.0*self.scale # Wheel radius
        self.D = 2.0*self.scale # Diameter of the robot

        self.line1 = DelayLine(0.0, -10.0)
        self.line2 = DelayLine(1.0, 10.0)
        self.sensor1 = Sensor(self.r*np.cos(np.pi/4.0), self.r*np.sin(np.pi/4.0))
        self.sensor2 = Sensor(self.r*np.cos(-np.pi/4.0), self.r*np.sin(-np.pi/4.0))
        self.neurons = [Neuron(p, 0.5) for p in np.arange(0, 1, 0.1)]

    def getBodyTransform( self ):
        # Affine transform from the body to the work FoR
        return np.array([[np.cos(self.theta), -np.sin(self.theta), self.x],
                       [np.sin(self.theta), np.cos(self.theta), self.y],
                       [0.0, 0.0, 1.0]])


    def sense( self, G ):
        T = self.getBodyTransform()

        r1 = self.sensor1.sense(G, T)
        r2 = self.sensor2.sense(G, T) 
        self.line1.step(r1)
        self.line2.step(r2)

        l,r = 0, 0
        for j in range(len(self.neurons)):
            sp = self.neurons[j].step(self.line1.read(self.neurons[j].pos), 
                                      self.line2.read(self.neurons[j].pos))

            l += sp*(1 - self.neurons[j].pos)
            r += sp*self.neurons[j].pos

        return l, r


    def wrap( self, x, y ):
        if x < 0:
            x = 2.0
        elif x > 2.0:
            x = 0.0

        if y < 0:
            y = 2.0
        elif y > 2.0:
            y = 0

        return x,y

    def updatePosition(self, G):
        # Main step function
        # First sense
        phi_L, phi_R = self.sense(G)
        # Then compute forward kinematics
        vl = (self.r/self.D)*(phi_R + phi_L)
        omega = (self.r/self.D)*(phi_R - phi_L)

        # Update the next state from the previous one
        self.theta += self.ostep*omega
        self.x += self.pstep*vl*np.cos(self.theta)
        self.y += self.pstep*vl*np.sin(self.theta)
        self.x, self.y = self.wrap(self.x, self.y)

        return self.x, self.y, self.theta

    def draw(self, ax):
        T = self.getBodyTransform()
        sp1 = np.dot(T, self.sensor1.pos)
        sp2 = np.dot(T, self.sensor2.pos)
        left_wheel = np.dot(T, np.array([0, self.D/2.0, 1]).T)
        right_wheel = np.dot(T, np.array([0, -self.D/2.0, 1]).T)

        # drawing body
        body = Circle((self.x, self.y), self.D/2.0, fill = False, color = [0, 0, 0] )
        # Drawing sensors
        s1 = Circle((sp1[0], sp1[1]), self.scale*0.1, color = 'red' )
        s2 = Circle((sp2[0], sp2[1]), self.scale*0.1, color = 'red' )
        w1 = Circle((left_wheel[0], left_wheel[1]), self.scale*0.2, color = 'black' )
        w2 = Circle((right_wheel[0], right_wheel[1]), self.scale*0.2, color = 'black' )

        ax.add_patch(body)
        ax.add_patch(s1)
        ax.add_patch(s2)
        ax.add_patch(w1)
        ax.add_patch(w2)

 

Agent

Now is the time to put everything together in a Braitenberg vehicle. We reuse the code from the previous articles with a different "sense" method. In the new method, we perform the measurement at the body-transformed locations of the sensors, activate the delay lines and compute the motor commands as explained before.


The result is acceptable: the agent successfully orients towards the source and pursues it indefinitely, as no other behaviour has been specified.

[Figure: the vehicle's trajectory as it orients towards the sound source and pursues it]

Here is the complete Python code in jeffress_agent.py :-

import numpy as np 
import matplotlib.pyplot as plt
from braitenberg import *
from source import *

source = Source(1.0, 1.0)
t = np.linspace(0, 100, 500)

xx = np.arange(-2,2,0.02)
yy = np.arange(-2,2,0.02)
X, Y = np.meshgrid(xx, yy)

fig, ax = plt.subplots(1, 1)
plt.show(block = False)
vehicle = Vehicle(0.2, 0.2, np.pi/2.0)

P = np.zeros((2, len(t)))

for i in range(len(t)):
    G = source.getSound(t[i])
    Z = G(X, Y)
    ax.cla()
    ax.contour(X, Y, Z, [0])
    ax.axis([0, 2, 0, 2])

    P[0, i], P[1, i], _ = vehicle.updatePosition(G)
    ax.plot(P[0, 0:i], P[1, 0:i], 'k.', markersize = 5.0)
    vehicle.draw(ax)

    plt.pause(0.1)

plt.show()
 

Conclusion

We have implemented the Jeffress model for sound localization. Such a mechanism is both simple and remarkably useful when used together with other algorithms for sound processing. However, the robot's movement is not as smooth as expected. This is because we use the raw discrete events, or spikes, to drive the wheels directly. A complete cognitive architecture should include intermediate mechanisms that guarantee a smooth motor output, some of which will be studied in future articles.
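For instance, low-pass filtering the spike trains before they reach the wheels would already help; here is a minimal sketch of the idea, with an illustrative smoothing constant rather than a value from this article:-

import numpy as np

def smooth_spikes(spikes, alpha = 0.1):
    # Leaky integrator: an exponential moving average turns a jittery
    # binary spike train into a smooth rate-like signal
    rate, out = 0.0, []
    for s in spikes:
        rate += alpha*(s - rate)
        out.append(rate)
    return np.array(out)

print(np.round(smooth_spikes([0, 1, 0, 0, 1, 0, 1, 0]), 3))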

 

References

  1. Campbell, R. A., & King, A. J. (2004). Auditory neuroscience: a time for coincidence? Current Biology, 14(20), R886-R888.
  2. Gerstner, W., Kistler, W. M., Naud, R., & Paninski, L. (2014). Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press.
