November 2016

Volume 31 Number 11

[Azure IoT Hub]

Capture and Analyze Brain Waves with Azure IoT Hub

By Benjamin Perkins

The brain is the engine that interprets simultaneous input from many sources and many interfaces, and then triggers some kind of action or reaction. Sources like a flower, the sun, or firecrackers, received through interfaces like smell, sight, and sound, can trigger calmness, a physical movement into the shade, or a startled jerk at a loud noise. A dependable algorithm like this doesn’t yet exist because of the massive number of variables between the source, the human and the interface.

The way to derive this complex algorithm is to get a better understanding of how the brain acts and reacts in numerous situations, like smelling a flower, being in the burning sun, or unexpectedly hearing firecrackers. This article describes how to gain insight into how the brain functions in given scenarios, in the hope of someday defining an algorithm that reacts dependably in multiple unexpected situations.

Capturing, Storing and Analyzing Brain Waves

The project described in this article uses numerous technologies to capture, store and analyze brain waves, each of which is briefly described in Figure 1. The first two components—capturing the brain waves and storing them in an Azure IoT Hub—are described in the next sections of this article. The remaining three components will be explained in Part 2 of this article in a future issue of MSDN Magazine.

Figure 1 Components of the Brain Analyzation Project

Component           Role      Brief Description
Emotiv Insight SDK  Capture   A brain interface that converts brain waves to numbers
Azure IoT Hub       Storage   Temporary storage queue for IoT-device-rendered data
SQL Azure           Storage   Highly scalable, affordable and elastic database
Stream Analytics    Storage   An interface between Azure IoT Hub and SQL Azure
Power BI            Analysis  Data analysis tool with simple graphic-based support

The sections are broken down by component, each one including a functional and technical description, plus the details of any coding or configuration requirements. The portions of the solution are presented in the order in which I created them; however, they could be built in a different order. The technical goal is to upload the brain waves collected using a brain computer interface (BCI), store them in a SQL Azure database and analyze the data with Power BI.

Capturing the Brain Waves

When the brain receives input and needs to respond, it decides how to answer by firing electrical impulses between neurons; the rhythmic patterns of these firings are called neural oscillations. These neural oscillations are physical events that produce real, recordable vibrations of different intensities at different locations in the brain.

An electroencephalograph captures these vibrations, and only in the past few years have companies started creating affordable BCIs to capture this brain activity. (There’s a list of many of those companies and devices at bit.ly/2c7j4fw.) Additionally, several of these companies have created SDKs for their devices that allow for real-time visualization and storage of brain activity.

I wrote a short post about my initial intent of putting my brain waves into Azure; you can read it at bit.ly/294Hi4R. Notice that I chose the Emotiv Insight BCI for this project. This BCI has five electrodes (AF3, AF4, T7, T8, and O1), each providing readings on five different frequency bands, whose associated states of mind are shown in Figure 2.

Figure 2 Emotiv Insight BCI Readings

Brain Frequency  State of Mind
ALPHA            Relaxed, high levels of creativity, reflective
LOWBETA          Social activities, excitement, alert
HIGHBETA         Focus, quick thinking, working
GAMMA            Optimal frequency for thinking, active thought
THETA            Sleep, drowsy, meditative and dreaming

The Emotiv SDK is downloadable from GitHub (github.com/Emotiv) and is easily configurable; this example uses the community-sdk version. While configuring the C# version of the SDK to run with Visual Studio, I hit three “gotchas” that weren’t intuitive:

  1. Pay close attention to the “bitness” (x86 or x64) of your Visual Studio project; its Platform Target must match the bitness of the native components in No. 3.
  2. Make sure DotNetEmotivSDK.dll is compiled to the same bitness as the components in No. 3.
  3. Manually copy edk.dll and glut32.dll/glut64.dll into the solution’s working directory, for example: /bin/Debug or /bin/Release.

To begin, navigate to the C# project in the community-sdk-master\examples\C# folder and open the DotNetEmotivSDK solution in Visual Studio. Set the DotNetEmotivSDK project as the startup project by right-clicking the project and selecting Set as StartUp Project, then compile the project by pressing Ctrl+Shift+B. Pay special attention to the Platform Target and keep it consistent throughout the configuration of the SDK; choose either x86 or x64.

Next, create a new console application in Visual Studio and add a reference to the DotNetEmotivSDK.dll that was created when you compiled the DotNetEmotivSDK project: right-click References, navigate to the build output directory (for example, \obj\x86\Release) and select the just-compiled binary file. Last, copy edk.dll and the glut*.dll file into the same working directory where DotNetEmotivSDK.dll was placed. There are numerous copies of edk.dll and glut*.dll in the SDK; choose the binaries in community-sdk-master\bin\win32 if you’ve compiled everything to 32-bit, otherwise choose the 64-bit versions.

Once the SDK is properly configured and your new console application is ready, add a using Emotiv directive to Program.cs to reference the capabilities in the library. If desired, view the BrainComputerInterface project in the downloadable example code. Pay special attention to the GetHeadsetInformation method, as this is where some pre-validation of the BCI device is executed.

The GetHeadsetInformation method subscribes to the EmoStateUpdatedEventHandler, which is triggered when the ProcessEvents method of the EmoEngine class is called. The GetHeadsetInformation method continues to call ProcessEvents within a while loop until the bool stopHeadsetInformation is set to false. When the EmoStateUpdated event is triggered, it executes the engine_EmoStateUpdated method, which checks the battery level and signal strength. It’s important to the validity of the collected BCI data that the battery has an acceptable charge and that there’s an adequate Bluetooth 4.0 LE connection between the BCI and the computer.
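Based on that description, here’s a minimal sketch of the GetHeadsetInformation pattern. It’s a sketch under a few assumptions, not the authoritative version: the event wiring is assumed to follow the SDK’s standard .NET pattern and ProcessEvents is assumed to accept a millisecond window; see the downloadable BrainComputerInterface project for the complete implementation:

// A sketch of the GetHeadsetInformation pattern; the validation logic
// lives in engine_EmoStateUpdated, shown next
static bool stopHeadsetInformation = true;

static void GetHeadsetInformation()
{
  EmoEngine engine = EmoEngine.Instance;
  // engine_EmoStateUpdated runs each time ProcessEvents surfaces a new EmoState
  engine.EmoStateUpdated +=
    new EmoEngine.EmoStateUpdatedEventHandler(engine_EmoStateUpdated);
  engine.Connect();
  while (stopHeadsetInformation)
  {
    engine.ProcessEvents(1000); // Pump SDK events for up to one second
  }
}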

In the source code, the capturing of the BCI data doesn’t begin until those two measurements pass an adequate threshold, for example, chargeLevel > 1 && signalStrength > EdkDll.IEE_SignalStrength_t.BAD_SIG. As long as the signal strength is greater than IEE_SignalStrength_t.NO_SIG (NO_SIG meaning no signal), the device is considered functional, but not optimal; therefore, the signalStrength must be at least GOOD_SIG before proceeding. Additionally, the maxChargeLevel is five, and a current charge level greater than one reflects a functional state. The code that captures the battery level, the signal strength and the contact quality for each of the electrodes is shown here:

// Read the current EmoState, then check signal strength and battery charge
EmoState es = e.emoState;
EdkDll.IEE_SignalStrength_t signalStrength = es.GetWirelessSignalStatus();
es.GetBatteryChargeLevel(out chargeLevel, out maxChargeLevel);
// Render the contact quality of each of the five electrodes
WriteLine($"AF3: {(int)es.GetContactQuality((int)EdkDll.IEE_InputChannels_t.IEE_CHAN_AF3)}");
WriteLine($"AF4: {(int)es.GetContactQuality((int)EdkDll.IEE_InputChannels_t.IEE_CHAN_AF4)}");
WriteLine($"T7:  {(int)es.GetContactQuality((int)EdkDll.IEE_InputChannels_t.IEE_CHAN_T7)}");
WriteLine($"T8:  {(int)es.GetContactQuality((int)EdkDll.IEE_InputChannels_t.IEE_CHAN_T8)}");
WriteLine($"O1:  {(int)es.GetContactQuality((int)EdkDll.IEE_InputChannels_t.IEE_CHAN_O1)}");

Caution: The BCI can return readings from the electrodes even when the contact quality is bad. Some electrodes might be working and capturing data while others aren’t, which isn’t ideal, because conclusions drawn later from the data analysis can be wrongly interpreted if all the electrodes aren’t fully functional during the session. There’s no code in the sample to measure and confirm that all electrodes are functional; nevertheless, this should be the case before storing the measurements. An alternative to coding that confirmation logic yourself is to use the online Emotiv CPANEL, accessible from bit.ly/1LZge5T. There, you’ll see something similar to Figure 3.

Figure 3 Validate Electrodes on the Brain Interface with BCI

Once the engine_EmoStateUpdated method confirms the BCI is functional, it sets stopHeadsetInformation = false, which breaks the while loop in the GetHeadsetInformation method. The C# code to read the frequencies from each of the electrodes is illustrated in Figure 4 and is found in the GetBrainInterfaceMeasurements method. The method first creates a single-dimensional array of type EdkDll.IEE_DataChannel_t with five elements, one per electrode on the device. Then the program loops through each of the five electrodes and outputs the frequency strengths to the console. Notice that the IEE_GetAverageBandPowers method of the EmoEngine class accepts the channel/electrode (channelList[i]) and the frequency variables (theta, alpha, low_beta, high_beta and gamma) into which the numeric representation of the brain wave is stored. Each of the readings, together with the electrode, is rendered to the console window using the static WriteLine method of the System.Console class.

Figure 4 Reading the Frequency Values of Brain Interface Electrodes

EmoEngine engine = EmoEngine.Instance;
// One array element per electrode on the device
EdkDll.IEE_DataChannel_t[] channelList = new EdkDll.IEE_DataChannel_t[5]
{
  EdkDll.IEE_DataChannel_t.IED_AF3,
  EdkDll.IEE_DataChannel_t.IED_AF4,
  EdkDll.IEE_DataChannel_t.IED_T7,
  EdkDll.IEE_DataChannel_t.IED_T8,
  EdkDll.IEE_DataChannel_t.IED_O1
};
while (true)
{
  for (int i = 0; i < 5; i++)
  {
    // Populate the frequency arrays with the latest reading for this electrode
    engine.IEE_GetAverageBandPowers(0, channelList[i],
      theta, alpha, low_beta, high_beta, gamma);
    WriteLine($"Channel: {channelList[i]}");
    WriteLine($"Alpha: {alpha[0]}, Low Beta: {low_beta[0]}, " +
      $"High Beta: {high_beta[0]}, Gamma: {gamma[0]}, " +
      $"Theta: {theta[0]}");
  }
}

The console application requires that you have an Emotiv Insight BCI and a valid Bluetooth connection to it. Regardless of the chosen BCI, the principles are the same:

  • Before you begin capturing and storing the data, make sure the device is in an optimal and consistent state so that all recorded data is gathered in the same manner.
  • Understand how the electrodes are configured and what they measure, then access the measurements and display them for later storage and analysis.

Once you have the console application working and writing results to the console window, continue to the next section, which discusses how to configure the Azure IoT Hub. How to configure a SQL Azure database into which Stream Analytics inserts the BCI data for analysis and learning will be discussed in Part 2 of this article.

The Parallel Between Coding and the Brain

I don’t believe I’m the only person who’s made a connection between code structures and human characteristics. In many ways, code platforms seem to have been designed around our own human traits; the ability to define ourselves in code works so well that it flows almost without thought. Consider the object-oriented programming term inheritance, where a child class receives a set of attributes and characteristics from a parent class. In the human context, children receive attributes from their parents, like eye color and hair color. Additionally, the ability to walk, blink and smell are examples of methodical characteristics that humans usually possess, as did my parents. However, those characteristics didn’t come directly from my parents; they were inherited through many generations, starting from the base Human class itself.

If you were to create a Human class, you’d likely include all the fundamental human attributes and characteristics within that class, like gender, eat, sleep, breathe and so on. Then you’d create a Parent class that inherits from the Human class, with some additional unique or more advanced characteristics such as reflection, speak, love and so on, assuming (or not) that each generation of the inherited class becomes more sophisticated and complex over time. Inheritance progressively continues into the implementation of a more current Child class.

The way humans speak and communicate changes with each generation, which is where another programming concept, polymorphism, arises. Polymorphism means that although the parent characteristic has the same name, purpose and intent as the child’s, it can be performed in a different way and with more inputs so that the outcome is more precise. For example, although the parent has the capacity to speak, the child can have a similar speak method that additionally includes the ability to converse in multiple languages. The additional parameter into the speak method would be language type; this input wouldn’t be present in the speak method of the parent. The derived or overloaded speak characteristic could also include enhanced communication capabilities like facial expression or tone inflection, as sketched in the code that follows.
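To make the analogy concrete, here’s a minimal, purely illustrative C# sketch of that hierarchy; all class and member names are hypothetical and exist only for this example:

using System;

// Illustrative only; the names are hypothetical
public class Human
{
  public string Gender { get; set; }
  public void Eat() { /* fundamental characteristics */ }
  public void Sleep() { }
  public virtual void Speak() => Console.WriteLine("Basic speech");
}

public class Parent : Human
{
  public void Reflect() { }
  public void Love() { }
}

public class Child : Parent
{
  // Polymorphism: the same name, purpose and intent as Speak, refined
  // by an additional input (the language type) the parent doesn't have
  public void Speak(string languageType) =>
    Console.WriteLine($"Speaking in {languageType}");
}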

Creating these structured classes, the sophisticated methods and the unique set of attributes is a fascinating journey into our internal selves. Constructing and defining ourselves is a great way to learn what makes us who we are. One thing, however, becomes quickly apparent after the model is built: how do you trigger the methods so the Child can actually do something? Instantiating the class is no big deal (Child child = new Child()), but what is the engine that then calls the methods and uses the attributes? Without the engine, all that exists is a motionless and thoughtless entity. While the human engine uses senses like sight, smell and touch to trigger an appropriate method, a computer engine uses data and coded logic to interpret the input as the basis for an action. To write that coded logic correctly, we’d first need a complete understanding of how humans work, which we don’t yet have. The missing piece is the brain.

Storing the Brain Waves

To store the brain waves collected from the BCI, several components are required. For an individual, you could conceivably make a simple ADO.NET connection to a local SQL database and be done; however, if you want to let many people with many devices use the application, an Azure IoT Hub is the best way to go because of its reliability and scalability. The following three components are required to successfully upload and store brain waves:

  1. An Azure IoT Hub
    1. A device identity
    2. Code to upload the brain wave
  2. SQL Azure instance and data table
  3. Stream Analytics interface

I’ll now discuss in more detail the creation of the Azure IoT Hub.

Create an Azure IoT Hub

An Azure IoT Hub is similar to a message queue in that it temporarily stores multiple rows of data with the expectation that another entity, like a reader or, in this case, a Stream Analytics job, is monitoring the queue and taking an action once the message arrives. The benefit of Azure IoT Hub is that it’s extremely resilient and can scale to a very large size in a short amount of time. While testing this solution, I inserted about three rows per second and the client-side record count matched the server-side count exactly. Three events per second is very small; the Azure IoT Hub can handle millions of events per second.

To create an Azure IoT Hub, you need an Azure subscription and access to the Azure Portal at bit.ly/2bA4vAn. Click the + New menu item, navigate to Internet of Things and select IoT Hub. Enter the required data and press the Create button. It’s possible to have only one Free tier IoT Hub per subscription; the Free tier supports 8,000 events per day and is the one I picked for this project. If you need to insert more events, choose a more appropriate tier. Once created, view the details, as shown in Figure 5.

Figure 5 The Details Page of the BCI Azure IoT Hub

Once the Azure IoT Hub is created, the next step is to create a unique device identity, which is required for connecting and uploading data to the Azure IoT Hub. The downloadable source contains a console project called BrainComputerInterface-CreateIdentity, which performs this activity. To create your own project, start by creating an empty console application in Visual Studio. Once created, right-click the project and select Manage NuGet Packages, then search for and add the Microsoft.Azure.Devices package (the provided example code uses version 1.0.11).

Before coding the creation of the device identity, access the Azure IoT Hub and get the connection string by selecting Shared access policies from the Settings blade. Next, select the appropriate policy, as explained in the table in Figure 6. Selecting one of the policies shown in the table opens a blade that displays the Permissions and Shared access keys. Copy the Connection string - primary key and use it to set the value of _connectionString shown in Figure 7.

Figure 6 Connection String Policies, Permissions and Usages

Policy             Permission                                    Usage
iothubowner        Registry Read/Write, Service/Device Connect   Administration
service            Service Connect                               Sending and receiving on the cloud-side endpoints
device             Device Connect                                Sending and receiving on the device-side endpoints
registryRead       Registry Read                                 Read access to the identity registry
registryReadWrite  Registry Read/Write                           Read/Write access to the identity registry
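The copied value follows the standard IoT Hub connection string format; a hypothetical iothubowner example, with placeholder host name and key, looks like this:

// A hypothetical iothubowner connection string; the host name and
// SharedAccessKey value are placeholders for your own values
static string _connectionString =
  "HostName=yourhub.azure-devices.net;" +
  "SharedAccessKeyName=iothubowner;" +
  "SharedAccessKey=<base64-key-from-the-portal>";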

Figure 7 Create a Device Key for Each Unique Device ID

static RegistryManager _registryManager;
static string _connectionString = "the iothubowner connection string";

static void Main(string[] args)
{
  // Connect to the identity registry using the iothubowner connection string
  _registryManager = RegistryManager.CreateFromConnectionString(_connectionString);
  AddDeviceAsync().Wait();
  ReadLine();
}

private static async Task AddDeviceAsync()
{
  string deviceId = "ADD UNIQUE DEVICE ID";
  Device device;
  try
  {
    // Create the device identity; if it already exists, retrieve it instead
    device = await _registryManager.AddDeviceAsync(new Device(deviceId));
  }
  catch (DeviceAlreadyExistsException)
  {
    device = await _registryManager.GetDeviceAsync(deviceId);
  }
  WriteLine($"Generated device key: {device.Authentication.SymmetricKey.PrimaryKey}");
}

To create a device identity, you need a connection string for a policy that has write permission to the identity registry, which means either the iothubowner or registryReadWrite policy. It’s highly recommended to use the policy with the least permissions required to perform the desired task; this reduces the chance of unintended actions such as global deletions or updates. Protect the iothubowner connection string and provide it only when the creation of device identities or other administrative activities is required.

View the sample code shown in Figure 7. As this is a simple program, both the _connectionString and the Microsoft.Azure.Devices.RegistryManager _registryManager are created as static class variables; it’s also fine to create them in the Main method and pass them as method parameters if desired. Instantiate the _registryManager variable by calling the CreateFromConnectionString method, then call the Program.AddDeviceAsync method asynchronously.

The Program.AddDeviceAsync method calls the Microsoft.Azure.Devices.RegistryManager.AddDeviceAsync method, passing a new Microsoft.Azure.Devices.Device. If an identity doesn’t already exist, it’s created; otherwise, a Microsoft.Azure.Devices.Common.Exceptions.DeviceAlreadyExistsException is thrown. The exception is handled because the code runs within a try{} catch{} block; within the catch{} block, the Microsoft.Azure.Devices.RegistryManager.GetDeviceAsync method is called. In both cases, whether the Add or Get method was called, the device key is rendered to the console.

Once the code is complete and compiles, execute it and note the device key, as it’s needed to create the DeviceClient instance that connects and sends data to the Azure IoT Hub in the next section. Also, look again at Figure 5 and notice that the Devices link is initially grayed out. After a device is created, the Devices link on the Azure IoT Hub blade is enabled; clicking it lets you disable/enable the device and retrieve the device key, just in case you missed it in the console window.
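The creation of the DeviceClient itself isn’t shown in the listings that follow, so here’s a minimal sketch of what it might look like; the host name, device ID and device key are placeholders you replace with your own values, and the types come from the Microsoft.Azure.Devices.Client NuGet package:

// A sketch of creating the DeviceClient used to send data to the hub;
// the host name, device ID and key values are placeholders
static DeviceClient deviceClient = DeviceClient.Create(
  "yourhub.azure-devices.net",   // Hostname shown on the IoT Hub blade
  new DeviceAuthenticationWithRegistrySymmetricKey(
    "ADD UNIQUE DEVICE ID",      // The deviceId registered earlier
    "the device key noted from the console output"));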

The code to capture the brain waves was already written in the previous section. Now, instead of writing the BCI output to the console, the code writes it to the Azure IoT Hub that was just created. In the sample code, there’s a project called BrainComputerInterface in which the while{} loop previously discussed in Figure 4 is changed to call a new method, SendBrainMeasurementToAzureAsync, as shown in Figure 8, which sends the BCI data to the Azure IoT Hub instead of dumping the reading to the console.

Figure 8 Inserting the Brain Wave into Azure IoT Hub

while (true)
{
  for (int i = 0; i < 5; i++)
  {
    engine.IEE_GetAverageBandPowers(0, channelList[i],
      theta, alpha, low_beta, high_beta, gamma);
    // Send the reading to the Azure IoT Hub instead of the console
    SendBrainMeasurementToAzureAsync(channelList[i].ToString(), theta[0].ToString(),
      alpha[0].ToString(), low_beta[0].ToString(),
      high_beta[0].ToString(), gamma[0].ToString());
  }
}

private static async void SendBrainMeasurementToAzureAsync(string channel,
  string theta, string alpha, string lowbeta, string highbeta,
  string gamma)
{
  // ...
  try
  {
    // Package the reading and its identifying attributes as a JSON message
    var brainActivity = new
      { ManufacturerId, HardwareId, ActivityId, ChannelId,
        DeviceId, UserName, MeasurementDateTime, theta,
        alpha, lowbeta, highbeta, gamma };
    var messageString = JsonConvert.SerializeObject(brainActivity);
    var message = new Message(Encoding.ASCII.GetBytes(messageString));
    await deviceClient.SendEventAsync(message);
  }
  catch (Exception ex)
  { // ...
  }
}

Notice that the SendBrainMeasurementToAzureAsync method uses the Microsoft.Azure.Devices.Client.DeviceClient class, as mentioned earlier, and Newtonsoft.Json classes to format the data and send the BCI reading to the cloud. If you’re creating a new project, add these two NuGet packages by right-clicking the project and selecting Manage NuGet Packages.

Now that the code for writing the BCI output to the Azure IoT Hub is complete, you can place the BCI on your head and start the upload. When the BrainComputerInterface program starts running, it asks you to select the scenario in which the brain waves are to be stored; examples include smelling a flower, being in the sun, hearing a firecracker and so on. Select the scenario, validate that the electrodes/contacts are green (see Figure 3) and, once the power and sensor modules are ready, the brain waves start being captured and uploaded to the cloud.

Note that at this point you’d see the Usage meter on the IoT Hub blade change as data is sent (see Figure 5); however, the data is deleted after about 24 hours because there’s not yet a database to store it, nor a program to move the messages from the Azure IoT Hub to a permanent storage location. In Part 2, a SQL Azure database is created, followed by the Stream Analytics job, so you can then analyze the data and discover new things.

Wrapping Up

The path this article series should ultimately lead you down is twofold. The first is the cognitive perspective: the more you learn about yourself and how your brain works, the more you can begin to replicate or enhance it to improve your quality of life. Machines are better and faster at completing mathematical computations, and they can draw, without emotion, from a much broader knowledge base for decision making than the human brain can. If you can somehow integrate this into your own cognitive being, using some kind of artificial intelligence, your ability to work faster and more precisely becomes greater.

The other path is the ability to use thoughts to control items in your day-to-day life. As the proficiency to capture and analyze brain waves increases, so does the ability to use them with confidence. Once one or more thought patterns like push, pull or spin are flawlessly defined, they can be used to control objects or perform activities like changing the television or radio channel. It may even be possible to capture a reading and take an action before your own will recognizes the desire to do so. The possibilities are endless.


Benjamin Perkins is an escalation engineer at Microsoft and author of four books on C#, IIS, NHibernate and Microsoft Azure. He recently completed coauthoring “Beginning C# 6 Programming with Visual Studio 2015” (Wrox). Reach him at benperk@microsoft.com.

Thanks to the following Microsoft technical expert for reviewing this article: Sebastian Dau, an Embedded Escalation Engineer on the Azure IaaS team.

