HoloLens Research Mode


With the new release of the operating system (10.0.17134.80) it is now possible on HoloLens to directly access the streams of all the cameras by enabling "Research Mode" on the device: in the Windows Device Portal, under System > Research mode, check the option to allow access to the sensor streams and reboot the device.

HoloLensForCV

HoloLensForCV is an open-source project on GitHub, born to help people use the HoloLens as a Computer Vision and Robotics research device.

In our case, we had to understand some strange behavior of the device for one of our customers, and being able to access the video cameras was essential.

For our purposes we developed a HoloLens application (Streamer) that shows on the HoloLens the stream of one of the four visible-light sensors, looping through them with the tap gesture, and a desktop companion application (Receiver) that shows all four sensor streams at the same time.

The fork of the project with our final version is on GitHub (https://github.com/mvaloriani/HoloLensForCV).

The Streamer

To achieve the first goal, in the Tools/Streamer project we need to add the following declarations to AppMain.h:

void SwitchCamera();
void InitializeSensorTypeArray();

// Used for looping through the different sensor types.
HoloLensForCV::SensorType _sensorTypeArray[4];
// _currentCamera: current index into _sensorTypeArray.
int _currentCamera;

In AppMain.cpp, the method that initializes the HoloLens media frame readers is StartHoloLensMediaFrameSourceGroup().

So, the first thing to do to make it all work is to enable the sensors we want to use, by adding them to the enabledSensorTypes vector that keeps track of the enabled sensor types:

#if ENABLE_HOLOLENS_RESEARCH_MODE_SENSORS
    enabledSensorTypes.emplace_back(
        HoloLensForCV::SensorType::VisibleLightLeftLeft);

    enabledSensorTypes.emplace_back(
        HoloLensForCV::SensorType::VisibleLightLeftFront);

    enabledSensorTypes.emplace_back(
        HoloLensForCV::SensorType::VisibleLightRightFront);

    enabledSensorTypes.emplace_back(
        HoloLensForCV::SensorType::VisibleLightRightRight);
#else

The line _sensorFrameStreamer = ref new HoloLensForCV::SensorFrameStreamer(); is used to create, for each enabled sensor type, a SensorFrameStreamingServer and to assign each one a service name (in Shared/HoloLensForCV/Sensor Frame Streaming/SensorFrameStreamer.cpp). The service name will be used later by the Receiver companion app to retrieve the data coming from the stream.
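For reference, this is roughly how the streamer wiring looks inside StartHoloLensMediaFrameSourceGroup(); a minimal sketch that assumes the sample's Enable(sensorType) call:

_sensorFrameStreamer = ref new HoloLensForCV::SensorFrameStreamer();

// Each enabled sensor type gets its own streaming server and service name.
for (const auto enabledSensorType : enabledSensorTypes)
{
    _sensorFrameStreamer->Enable(enabledSensorType);
}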

In the same way, the following code enables, for each enabled sensor type, the sensor frame acquisition:

_holoLensMediaFrameSourceGroup = ref new HoloLensForCV::MediaFrameSourceGroup(
    _selectedHoloLensMediaFrameSourceGroupType,
    _spatialPerception,
    _sensorFrameStreamer);
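Right after creating it, the sample enables each sensor on the source group and starts it asynchronously. A minimal sketch of that step (the flag name follows the sample; treat the exact callback shape as an assumption):

for (const auto enabledSensorType : enabledSensorTypes)
{
    _holoLensMediaFrameSourceGroup->Enable(enabledSensorType);
}

// Start the frame readers; frames then flow to the streamer's servers.
concurrency::create_task(_holoLensMediaFrameSourceGroup->StartAsync()).then(
    [&]()
    {
        _holoLensMediaFrameSourceGroupStarted = true;
    });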

Which sensor is currently shown on the device is determined in the OnUpdate function of the Streamer's AppMain.cpp, by setting the HoloLensForCV::SensorType renderSensorType variable.

We want to loop through all the sensors we enabled, so in AppMain.cpp we add two functions:

// Used for looping through the cameras.
void AppMain::SwitchCamera() {
    // Use the element count, not sizeof(_sensorTypeArray) (which is a size
    // in bytes), and wrap around after the last sensor.
    const int sensorCount = static_cast<int>(
        sizeof(_sensorTypeArray) / sizeof(_sensorTypeArray[0]));

    _currentCamera = (_currentCamera + 1) % sensorCount;
}

// Initialize _sensorTypeArray with the four VisibleLight sensors.
void AppMain::InitializeSensorTypeArray() {
    _sensorTypeArray[0] = HoloLensForCV::SensorType::VisibleLightLeftLeft;
    _sensorTypeArray[1] = HoloLensForCV::SensorType::VisibleLightLeftFront;
    _sensorTypeArray[2] = HoloLensForCV::SensorType::VisibleLightRightFront;
    _sensorTypeArray[3] = HoloLensForCV::SensorType::VisibleLightRightRight;

    // Start from VisibleLightLeftLeft.
    _currentCamera = 0;
}

When the user performs the tap gesture, AppMain::OnSpatialInput is raised, so we can call SwitchCamera there.
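For example, a minimal sketch (the signature follows the sample's holographic app base class; treat it as an assumption if your checkout differs):

void AppMain::OnSpatialInput(
    _In_ Windows::UI::Input::Spatial::SpatialInteractionSourceState^ pointerState)
{
    // Cycle to the next visible-light sensor on every tap.
    SwitchCamera();

    // The sample also repositions the rendering slate in front of the user
    // here (see the note about PositionHologram below).
}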

In the OnUpdate function we can set the renderSensorType as:

#if ENABLE_HOLOLENS_RESEARCH_MODE_SENSORS
    //Get the current renderSensorType based on _currentCamera
    HoloLensForCV::SensorType renderSensorType = _sensorTypeArray[_currentCamera];
#endif

The OnSpatialInput function also calls _slateRenderer->PositionHologram, which repositions the hologram two meters in front of the user. We suggest changing the value to three meters, because we noticed that two meters is too close to the user.
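The distance itself lives in the slate renderer's positioning code; the change we mean amounts to something like this (the constant's name and exact location are an assumption and may differ in your checkout):

// In the slate renderer's hologram-positioning code:
const float distanceFromUser = 3.0f; // meters; the sample uses 2.0f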

The Receiver

In the Receiver, to connect to a sensor stream, we need to use the address of the HoloLens device and the service name defined in Shared/HoloLensForCV/Sensor Frame Streaming/SensorFrameStreamer.cpp for the sensor we enabled in Tools/Streamer with the sensorFrameStreamer.
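Conceptually, each service name identifies a socket server running on the device, so connecting boils down to a TCP connect to the HoloLens IP on that service name. A minimal C# sketch (the class, address, and service name below are placeholders; in the sample the actual connection and frame parsing are handled by SensorFrameReceiver):

using System.Threading.Tasks;
using Windows.Networking;
using Windows.Networking.Sockets;

public static class ConnectionSketch
{
    // Placeholder values: substitute your device's IP address and the
    // service name assigned to the sensor in SensorFrameStreamer.cpp.
    public static async Task<StreamSocket> ConnectToSensorStreamAsync(
        string deviceAddress, string serviceName)
    {
        var socket = new StreamSocket();
        await socket.ConnectAsync(new HostName(deviceAddress), serviceName);
        return socket;
    }
}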

Initially, in Tools/Receiver, the cases for sensors other than the one used in the sample were missing, so we needed to add them both in Tools/Receiver/MainPage.xaml.cs, in OnServerConnectionEstablished, and in Shared/HoloLensForCV/SensorFrameReceiver.cpp, inside ReceiveSensorFrameAsync; otherwise we would not be able to get the desired sensor streams:

/* Handle the visible-light sensor cases: all four stream Bgra8 frames. */
case SensorType::VisibleLightLeftLeft:
case SensorType::VisibleLightLeftFront:
case SensorType::VisibleLightRightFront:
case SensorType::VisibleLightRightRight:
    pixelFormat = Windows::Graphics::Imaging::BitmapPixelFormat::Bgra8;
    break;

Now that we are able to get the data from all the sensors we want, we need to adjust the displayed image, because the data coming from the visible-light sensors is rotated and stretched by default in the sample.

We first add a RotateTransform to the Image:

// Rotate the image 90 degrees to compensate for the sensor orientation.
// A centered RenderTransformOrigin keeps the rotated image in place
// (skip that line if the origin is already set in XAML).
RotateTransform rotateTransform = new RotateTransform();
rotateTransform.Angle = 90;
this._pvImage.RenderTransformOrigin = new Windows.Foundation.Point(0.5, 0.5);
this._pvImage.RenderTransform = rotateTransform;

Then we add an ImageHelper class that we will use to resize the image and avoid stretching. It uses the Win2D.uwp library (installed via NuGet):

using System.Runtime.InteropServices.WindowsRuntime;
using Microsoft.Graphics.Canvas;
using Microsoft.Graphics.Canvas.Effects;
using Windows.Graphics.Imaging;

public static class ImageHelper
{
    // Helper method to resize a SoftwareBitmap via a Win2D ScaleEffect.
    public static SoftwareBitmap Resize(this SoftwareBitmap softwareBitmap, float newWidth, float newHeight)
    {
        // Note: do not dispose the shared CanvasDevice; other Win2D
        // consumers in the process rely on it.
        var resourceCreator = CanvasDevice.GetSharedDevice();

        using (var canvasBitmap = CanvasBitmap.CreateFromSoftwareBitmap(resourceCreator, softwareBitmap))
        using (var canvasRenderTarget = new CanvasRenderTarget(resourceCreator, newWidth, newHeight, canvasBitmap.Dpi))
        using (var drawingSession = canvasRenderTarget.CreateDrawingSession())
        using (var scaleEffect = new ScaleEffect())
        {
            scaleEffect.Source = canvasBitmap;
            scaleEffect.Scale = new System.Numerics.Vector2(
                newWidth / softwareBitmap.PixelWidth,
                newHeight / softwareBitmap.PixelHeight);
            drawingSession.DrawImage(scaleEffect);
            drawingSession.Flush();

            return SoftwareBitmap.CreateCopyFromBuffer(
                canvasRenderTarget.GetPixelBytes().AsBuffer(),
                BitmapPixelFormat.Bgra8,
                (int)newWidth,
                (int)newHeight,
                BitmapAlphaMode.Premultiplied);
        }
    }
}

In the UpdateImages() function, we call the Resize method of the ImageHelper class with the desired width and height (w and h) as parameters:

await imageSource.SetBitmapAsync(ImageHelper.Resize(_pvCamImgContext.ConvertedImage, w, h));

In our solution, we defined four Image controls, so that the four visible-light sensor streams can all be seen at once.

Matteo Valoriani and Francesco Clasadonte