Mobile Development and 3D Graphics - Part 5

Android Sensors

Android Sensors - Introduction

Using the Sensor API

To use the Sensor API, we:

- obtain a SensorManager with getSystemService()
- obtain a Sensor object for the sensor of interest with getDefaultSensor()
- register a SensorEventListener for that sensor with registerListener()
- implement onSensorChanged() (and onAccuracyChanged()) in the listener to receive the readings

Example of setting up the Sensor API

Here is a code extract showing the use of the previous API calls.

class SensorActivity: AppCompatActivity(), SensorEventListener {

    var accel: Sensor? = null
    
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        
        val sMgr = getSystemService(Context.SENSOR_SERVICE) as SensorManager
        accel = sMgr.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)

        sMgr.registerListener(this, accel, SensorManager.SENSOR_DELAY_UI)
    
    }
    
    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {
        // Not needed here, but must be overridden to satisfy SensorEventListener
    }
    
    override fun onSensorChanged(ev: SensorEvent) {
        // Test which sensor has been detected.
        if(ev.sensor == accel) {
            // handle the accelerometer sensor
        }
    }
}

Accelerometer

Ref: Chapter 29 of "Pro Android 4" (Komatineni and MacLean)

Effect of gravity

Magnetic field sensor

Combining accelerometer and magnetic field values

Components of the orientations array

The three components of the orientations array are:

- azimuth - the compass bearing (rotation about the device's z axis)
- pitch - rotation about the device's x axis
- roll - rotation about the device's y axis

All three are returned in radians.

Azimuth and pitch

Coding an app to get the device's orientation using both sensors

How should this be coded? The simplest way is to have ONE listener for BOTH the accelerometer AND the magnetic field sensor, with logic to determine which sensor produced each event. In this method we:

- register the same listener for both sensors
- in onSensorChanged(), test which sensor the event came from and copy its values into the appropriate array
- once we have values from both sensors, calculate the rotation matrix
- obtain the orientations (azimuth, pitch and roll) from the rotation matrix

Template example showing general logic of application to calculate the orientation (incomplete!)

In the exercises, you need to complete this!

class SensorActivity: AppCompatActivity(), SensorEventListener {

    var accel: Sensor? = null
    var magField: Sensor? = null


    // Arrays to hold the current acceleration and magnetic field sensor values
    var accelValues = FloatArray(3)
    var magFieldValues = FloatArray(3)
    
    override fun onCreate(savedInstanceState: Bundle?) {
        // TODO
    }
    
    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {
           // Leave blank 
    }
    
    override fun onSensorChanged(ev: SensorEvent) {

        // Test which sensor has been detected.
        if(ev.sensor == accel) {

            // Copy the current values into the acceleration array
            accelValues = ev.values.copyOf()

        } else if (ev.sensor == magField) { 

            // TODO ... do the same for the magnetic field values (not shown)

        }
    
        // TODO Calculate the matrix.
        
        // TODO Get the orientations.
    }
}

Other sensors

Other sensors use the same Sensor API. For example, you can use the Sensor API to obtain the light level (in lux), the air temperature (in degrees Celsius), and the air pressure (in millibars). In all cases, you obtain the reading via the first member of the values array, i.e. ev.values[0]. See the list of sensor types in the Android documentation for the full range.
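As an illustration (a sketch only, not tested on a device), reading the light level follows exactly the same pattern as the accelerometer example earlier, using Sensor.TYPE_LIGHT; the class name is our own:

```kotlin
// Sketch: reading the ambient light level. Same pattern as the
// accelerometer example, but with TYPE_LIGHT; the reading arrives
// in ev.values[0] (lux).
class LightActivity : AppCompatActivity(), SensorEventListener {

    var light: Sensor? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val sMgr = getSystemService(Context.SENSOR_SERVICE) as SensorManager
        light = sMgr.getDefaultSensor(Sensor.TYPE_LIGHT)
        sMgr.registerListener(this, light, SensorManager.SENSOR_DELAY_UI)
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {
        // Not needed, but required by the interface
    }

    override fun onSensorChanged(ev: SensorEvent) {
        if (ev.sensor == light) {
            val lux = ev.values[0] // light level in lux
            // update the UI with lux here
        }
    }
}
```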

Remapping the coordinate system

Virtual and augmented reality are common applications of the sensors. In a virtual-reality environment, the device's screen shows a computer-generated representation of the world around you, e.g. a rendering of roads, parks, buildings, hills and so on. In augmented reality, computer generated features (e.g. roads and points of interest) are overlaid on the device's camera view.

In both cases (and particularly augmented reality) you'll want the display on your device to align with where you are facing in the real world. This works in portrait mode without any further action, as the axes of our current view of the phone align with the device's standard coordinate system. However, in landscape mode there is a mismatch between the device's default coordinate system and our current view of it. This causes problems if we want to draw on the screen, as coordinate systems for drawing (e.g. OpenGL) are always relative to the CURRENT VIEW of the device. In landscape mode, the x axis of our current view runs along the long side of the device and the y axis along the short side.

However, this mismatches the device's default coordinate system, which is defined relative to the natural (portrait) orientation: x runs along the short side and y along the long side.

Therefore we need to remap the coordinate system so that the axes used for the sensor readings match the axes of our current view of the device.

We can achieve this through SensorManager.remapCoordinateSystem(). This remaps the rotation matrix (from SensorManager.getRotationMatrix()) to work with the device's current view. We need a new array for the remapped matrix. To remap from portrait to landscape we would do:

SensorManager.remapCoordinateSystem (originalMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, remappedMatrix)

The remapping is shown below.

[Figure: Remapping the coordinate system]

How to determine the current rotation of the device?

Ideally we want our app to work not just in portrait and landscape, but also in "upside-down portrait" (with the normal bottom of the phone at the top) and "upside-down landscape" (with the normal bottom of the phone on the left rather than the right). How do we do this? We can obtain the current rotation of the device using a WindowManager object. Here is how we do this:
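A possible sketch is below (untested on a device; the axis pair for ROTATION_90 matches the landscape remapping shown earlier, and the other pairs follow the same pattern). Note that defaultDisplay is deprecated on API 30+, where Activity.display can be used instead; originalMatrix and remappedMatrix are assumed to be float arrays declared elsewhere:

```kotlin
// Sketch: choose remapping axes based on the device's current rotation,
// obtained from the WindowManager. ROTATION_90/270 are the two landscape
// orientations; ROTATION_180 is upside-down portrait.
val rotation = windowManager.defaultDisplay.rotation

val (axisX, axisY) = when (rotation) {
    Surface.ROTATION_90  -> Pair(SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X)
    Surface.ROTATION_180 -> Pair(SensorManager.AXIS_MINUS_X, SensorManager.AXIS_MINUS_Y)
    Surface.ROTATION_270 -> Pair(SensorManager.AXIS_MINUS_Y, SensorManager.AXIS_X)
    else -> Pair(SensorManager.AXIS_X, SensorManager.AXIS_Y) // ROTATION_0, portrait
}

SensorManager.remapCoordinateSystem(originalMatrix, axisX, axisY, remappedMatrix)
```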

Smoothing the sensors

You'll probably find the sensor readings are very "jittery", changing very rapidly. It is worth carrying out a process known as exponential smoothing to reduce this jitter.

To do this, when we get a new value from the sensor (obtained from ev.values in onSensorChanged()) we store, not the raw value, but the raw value multiplied by a *smoothing factor* plus the *previous stored value* multiplied by one minus the smoothing factor. In other words, if *k* is the smoothing factor:

    stored value = raw value * k + previous value * (1 - k)

Typically k is a small value (e.g. 0.05 or 0.075), so the previous value counts for significantly more than the new value. The previous value is itself influenced by earlier values, so a series of consistent values counts for more than a single wildly different value. Hence the readings are smoothed out, as "noise" counts for less than consistent readings.
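As a sketch in plain Kotlin (the function names here are our own, not part of the Android API), the smoothing step might look like this:

```kotlin
// Exponential smoothing: blend each raw reading with the stored value.
// A small k means the previous value dominates, giving heavy smoothing.
fun smooth(raw: Float, previous: Float, k: Float): Float =
    raw * k + previous * (1 - k)

// Apply the smoothing to each component of a sensor reading (e.g. the
// 3-member values array delivered to onSensorChanged), updating in place.
fun smoothAll(raw: FloatArray, stored: FloatArray, k: Float) {
    for (i in stored.indices) {
        stored[i] = smooth(raw[i], stored[i], k)
    }
}

fun main() {
    var stored = 0f
    // A run of consistent readings of 10 pulls the stored value
    // gradually towards 10; a single outlier would barely move it.
    repeat(5) { stored = smooth(10f, stored, 0.05f) }
    println(stored)
}
```

In onSensorChanged() you would call smoothAll(ev.values, accelValues, k) rather than copying the raw values directly.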

Exercise

  1. Create an app which has three TextViews representing the acceleration in the three axes of the phone/tablet. Make the Activity act as a SensorEventListener and set up the SensorEventListener in onCreate(). In your onSensorChanged(), update the three TextViews to hold the acceleration in the three axes. Your activity will need a float array (3 member) to hold the acceleration in the three axes. Put your phone/tablet flat on a surface in portrait mode and see what values you get. Rotate your phone/tablet so it's standing up in portrait mode. Now which value reads approximately 9.81? Why? Finally rotate your phone/tablet so it's standing up in landscape mode. Again, now which value reads approximately 9.81 and why?
  2. Combine the accelerometer and magnetic field sensors, as discussed in the lecture notes for this week and using the code example above to get started. You'll need to register both sensors, and make sure your onSensorChanged() handles both sensors. As shown in the lecture notes, copy the sensor values into the appropriate array (either the accelerometer values array or the magnetic field values array) and then calculate the rotation matrix. From the rotation matrix, calculate the orientations (azimuth, pitch and roll). Create three more text views to hold the azimuth, pitch and roll.
  3. Try rotating the device in different ways, e.g. rotating it while flat, "pitching" it up and down, and "rolling" it onto its side. You will need to declare attributes for the orientations (a 3-member float array holding azimuth, pitch and roll respectively) and for the rotation matrix (a 16-member float array). Try running the app in landscape mode versus portrait. What happens in landscape? Can you see why?
  4. More advanced: develop a compass app which shows which direction we are facing (N, NE, E, SE, S, SW, W or NW). Do this by working out what bearing each of the 8 compass points corresponds to, and then find which is closest to the azimuth.
  5. Advanced exercise: develop a simple light-sensitive torch. Here is some code to get you started, showing how to turn the torch on and off (using the camera flashlight). This code assumes the user is pressing "on" and "off" buttons. However you should adapt it to turn the torch on if it's off and at least 10 readings of 5 lux or less are received, and turn it off if it's on and at least 10 readings of more than 5 lux are received.
    import android.content.Context
    import android.hardware.camera2.CameraCharacteristics
    import android.hardware.camera2.CameraManager
    import android.widget.Button
    import androidx.appcompat.app.AppCompatActivity
    import android.os.Bundle
    
    class MainActivity : AppCompatActivity() {
    
        var cameraId: String? = null
    
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_main)
    
            // Get the CameraManager object
            val cMgr = getSystemService(Context.CAMERA_SERVICE) as CameraManager
    
            // filter the list of device cameras depending on whether flash is available
            val cameras = cMgr.cameraIdList.filter {
                // FLASH_INFO_AVAILABLE is a Boolean?, so compare with true
                cMgr.getCameraCharacteristics(it).get(CameraCharacteristics.FLASH_INFO_AVAILABLE) == true
            }
    
            // If there was a suitable camera...
            if(cameras.isNotEmpty()) {
                cameraId = cameras[0] // select the first camera with flash (arbitrarily)
    
                // Find on/off buttons using their ID
                val torchOn = findViewById<Button>(R.id.torchOn)
                val torchOff = findViewById<Button>(R.id.torchOff)
    
                // Add click listeners to both. We call the CameraManager's
                // setTorchMode() method, using the chosen camera, and passing in
                // true or false respectively.
    
                torchOn.setOnClickListener {
                    // cameraId is a mutable property, so a plain null check
                    // won't smart-cast it; use let instead
                    cameraId?.let { id -> cMgr.setTorchMode(id, true) }
                }
    
                torchOff.setOnClickListener {
                    cameraId?.let { id -> cMgr.setTorchMode(id, false) }
                }
            }
        }
    }