AWS GREENGRASS, IOT AT THE EDGE
29 Aug 2019
Rohan Bhowmick, HashedIn
#Latest Blogs | 8 min read

 

WHAT IS EDGE COMPUTING?

 

Edge computing is a distributed, open IT architecture built around decentralized processing power, enabling mobile computing and enhancing Internet of Things (IoT) technologies. With edge computing, data is processed by the device itself or by a local computer or server, rather than being transmitted to a data center.

 

THE IMPORTANCE OF EDGE COMPUTING

 

Edge computing enables data-stream acceleration, including real-time data processing with minimal latency. It allows smart applications and devices to respond to data almost instantaneously, as it is being created, eliminating lag time. This is critical for technologies such as self-driving cars and has equally important benefits for businesses.

 

Edge computing also enables large amounts of data to be processed efficiently near the source, reducing Internet bandwidth usage. Together, these features reduce costs and ensure that applications can be used effectively in remote locations. In addition, the ability to process data without ever putting it into a public cloud adds a useful layer of security for sensitive data.

 

AWS GREENGRASS – EDGE COMPUTING FOR IOT

 

WHAT IS AWS GREENGRASS?

 

AWS IoT Greengrass seamlessly extends AWS to edge devices so they can act locally on the data they generate, while still using the cloud for management, analytics, and durable storage. With AWS IoT Greengrass, connected devices can run AWS Lambda functions, execute predictions based on machine learning models, keep device data in sync, and communicate with other devices securely – even when not connected to the Internet.

 

HOW DOES AWS GREENGRASS HELP IN EDGE COMPUTING?

AWS IoT Greengrass devices can act locally on the data they generate so they can respond quickly to local events, while still using the cloud for management, analytics, and durable storage. The local resource access feature allows AWS Lambda functions deployed on AWS IoT Greengrass Core devices to use local device resources like cameras, serial ports, or GPUs so that device applications can quickly access and process local data. AWS IoT Greengrass lets connected devices operate even with intermittent connectivity to the cloud. Once the device reconnects, AWS IoT Greengrass synchronizes the data on the device with AWS IoT Core, providing seamless functionality regardless of connectivity.

 

A CASE STUDY

 

PROBLEM STATEMENT

 

Many healthcare companies focus on IoT and data analytics on the cloud. Scanned medical images are uploaded to the cloud and passed through trained machine learning models to analyze the patient's health condition. The issue with cloud-only IoT pipelines is the need for a continuously available network connection, which can lead to enormous power consumption as well as huge computational cost.

 

SOLUTION

 

AWS Greengrass helps reduce the turnaround time by deploying the entire pipeline on a local device, such as an EC2 instance. Images are uploaded to the EC2 instance through any storage or streaming medium, such as a Redis publish-subscribe channel. Moreover, such an application demands fast inference responses, which is hard to achieve when every request has to round-trip to the cloud. AWS Greengrass supports locally deployed ML models and local Lambda functions that can access local resources. So, whenever an image is uploaded locally, it can be passed through the locally deployed ML model for inference and, if required, synced with the cloud for further training. Further, the inferred data can be sent to other AWS services for processing through the local Lambda.
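
As a rough illustration, the local pipeline could look like the sketch below. It assumes a Redis channel named scan-images running on the core, a TensorFlow model deployed as an ML resource at /ml/classifier, and an output MQTT topic healthcare/inference; all of these names are illustrative, not prescribed by Greengrass.

# Minimal sketch of the local inference pipeline. Channel, path, and topic
# names are assumptions made for this example.
import io
import json
import threading

import greengrasssdk          # bundled into the Lambda deployment package
import numpy as np
import redis
import tensorflow as tf
from PIL import Image

iot_client = greengrasssdk.client("iot-data")
model = tf.keras.models.load_model("/ml/classifier")   # assumed ML-resource destination path

def classify(image_bytes):
    # Decode, resize, and normalize the scanned image, then run local inference.
    image = Image.open(io.BytesIO(image_bytes)).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)
    return model.predict(batch)[0].tolist()

def listen():
    # Consume images from the local Redis channel and publish results over MQTT.
    pubsub = redis.Redis(host="localhost").pubsub()
    pubsub.subscribe("scan-images")
    for message in pubsub.listen():
        if message["type"] == "message":
            iot_client.publish(
                topic="healthcare/inference",
                payload=json.dumps({"scores": classify(message["data"])}),
            )

# A pinned (long-lived) Lambda can start its own worker thread at module load.
threading.Thread(target=listen, daemon=True).start()

def function_handler(event, context):
    # Required entry point; the real work happens in the background thread.
    return

Because the Lambda is pinned (long-lived), the background thread keeps consuming images for as long as the core is running, whether or not the cloud is reachable.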

 

ARCHITECTURE DIAGRAM

 

FOR OUR USE CASE WE WILL BE USING THE FOLLOWING SERVICES:

  • AWS IoT
  • AWS Greengrass
  • Amazon SageMaker
  • AWS Lambda
  • Amazon S3

 

TECHNICAL OVERVIEW OF THE COMPONENTS USED

 

Greengrass Core – Greengrass Core is the software installed on the EC2 instance. It extends cloud capabilities to local devices, enabling them to collect and analyze data closer to the source of information, react autonomously to local events, and communicate securely with each other on local networks.

 

Local Lambda – These are local functions deployed on the EC2 instance; they can access both local and ML resources and can be invoked on a local event. Each function, zipped together with the Greengrass Core SDK as a deployment package, communicates with AWS IoT via MQTT.
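
A bare-bones pinned Lambda along these lines is sketched below; the MQTT topic healthcare/events is an assumption for illustration.

# Minimal sketch of a long-lived local Lambda. The deployment package zips this
# file together with the greengrasssdk module; the function publishes to AWS IoT
# over MQTT through the Greengrass core. The topic name is an assumption.
import json

import greengrasssdk

client = greengrasssdk.client("iot-data")

def function_handler(event, context):
    # Invoked on a local event, e.g. a message routed to this function by a subscription.
    client.publish(
        topic="healthcare/events",
        payload=json.dumps({"received": event}),
    )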

 

ML Inference – Greengrass allows ML models to be deployed on the EC2 instance as an ML resource, which can be accessed by the local Lambda to perform ML inference and send the data back to the cloud. For the above scenario, we train a TensorFlow model for image classification and store it in an S3 bucket, which is then attached as an ML resource for the Greengrass device.
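
The cloud-side step could look roughly like the sketch below, which packages a stand-in Keras classifier (MobileNetV2, purely for illustration) and uploads it to an assumed S3 bucket so it can be attached to the Greengrass group as an ML resource.

# Hypothetical sketch: package a trained classifier and upload it to S3.
# The bucket and key names are assumptions.
import tarfile

import boto3
import tensorflow as tf

# Stand-in for the trained medical-image classifier.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
model.save("classifier")                      # writes a TensorFlow SavedModel directory

# Greengrass extracts .tar.gz ML resources into the resource's destination path,
# so place the SavedModel files at the archive root.
with tarfile.open("classifier.tar.gz", "w:gz") as tar:
    tar.add("classifier", arcname=".")

boto3.client("s3").upload_file("classifier.tar.gz", "my-models-bucket", "classifier.tar.gz")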

 

Local Resources – These are resources that act as storage or as a source of data on the EC2 instance. For the above use case, we use the Redis publish-subscribe model to stream the images. The local Lambda function listens to the streaming topics; once it receives an image, it runs a prediction using the trained model.
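
The producer side can be as small as the sketch below, which pushes a scanned image into the channel that the local Lambda subscribes to (the channel and file names are assumptions).

# Hypothetical producer: stream a scanned image into the local Redis channel.
import redis

r = redis.Redis(host="localhost")
with open("scan_0001.png", "rb") as f:
    r.publish("scan-images", f.read())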

 

Machine Learning Resources – These resources exist in the Lambda runtime and act as a storage medium for trained ML models. For the above use case, we deploy the TensorFlow classification model as an ML resource.

 

DETAILED OVERVIEW OF THE COMPONENTS

 

GREENGRASS CORE

 

It is the software that is installed on the local device. The AWS IoT Greengrass Core software provides the following functionality:

  • Deployment and local execution of connectors and Lambda functions.
  • Secure, encrypted storage of local secrets and controlled access by connectors and Lambda functions.
  • MQTT messaging between AWS IoT and devices, connectors, and Lambda functions using managed subscriptions.
  • Secure connections between devices and the cloud using device authentication and authorization.
  • Local shadow synchronization of devices. Shadows can be configured to sync with the cloud (see the sketch after this list).
  • Controlled access to local devices and volume resources.
  • Deployment of cloud-trained machine learning models for running local inferences.
  • Automatic IP address detection that enables devices to discover the Greengrass core device.
  • Central deployment of new or updated group configuration. After the configuration data is downloaded, the core device is restarted automatically.
  • Secure, over-the-air software updates of user-defined Lambda functions.
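
As an example of the shadow feature, a local Lambda could report device state roughly as in the sketch below; the thing name is an assumption, and the reported state reaches AWS IoT only if the device's shadow is configured to sync with the cloud.

# Hedged sketch: update a device's local shadow from a Greengrass Lambda.
import json

import greengrasssdk

client = greengrasssdk.client("iot-data")
client.update_thing_shadow(
    thingName="ScannerDevice01",                     # assumed thing name
    payload=json.dumps({"state": {"reported": {"status": "online"}}}),
)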

 

LOCAL LAMBDA

 

AWS Greengrass supports local Lambda functions, which get deployed on the Greengrass Core. A local Lambda can access local resources and can be invoked on local events. It can then send data to other AWS services such as S3 or DynamoDB. It is zipped with the Greengrass SDK and uploaded as a deployment package so that it can send data to AWS IoT via MQTT messaging. It can also access ML resources, which contain cloud-trained ML models.
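
For instance, forwarding an inference record to DynamoDB from a local Lambda might look like the sketch below; the table name and event fields are assumptions, and the call succeeds only while the core has connectivity and the Greengrass group role allows the action.

# Hedged sketch: a local Lambda writing an inference record to DynamoDB.
import boto3

table = boto3.resource("dynamodb").Table("InferenceResults")   # assumed table name

def function_handler(event, context):
    # "image_id" and "label" are assumed fields of the local event payload.
    table.put_item(Item={"image_id": event["image_id"], "label": event["label"]})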

 

 

RESOURCES IN GREENGRASS

 

  1. Local Resources – There are two types of local resources:
       • Volume resources – These act as storage on the device. We can upload data to these resources, which can invoke the Lambda function. Files or directories on the root file system (except under /sys, /dev, or /var) can be used as a volume resource.
       • Device resources – These act as a source of data. For example, on a device such as a Raspberry Pi, a device resource can be a camera module. Only character devices or block devices under /dev are allowed as device resources.
  2. Machine Learning Resources – These resources are stored in the Lambda runtime. ML resources store cloud-trained models taken from S3 buckets or from Amazon SageMaker training jobs, and they are accessed by the local Lambda functions on the device (see the sketch after this list).
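
On the cloud side, the three resource types could be declared for a Greengrass group with the Greengrass (V1) API roughly as in the sketch below; every name, path, and the S3 URI are illustrative assumptions.

# Hedged sketch: define volume, device, and ML resources for a Greengrass group.
import boto3

gg = boto3.client("greengrass")
gg.create_resource_definition(
    Name="CaseStudyResources",
    InitialVersion={
        "Resources": [
            {   # Volume resource: a directory on the core's root file system.
                "Id": "image-volume",
                "Name": "ImageVolume",
                "ResourceDataContainer": {
                    "LocalVolumeResourceData": {
                        "SourcePath": "/data/images",
                        "DestinationPath": "/images",
                    }
                },
            },
            {   # Device resource: a character or block device under /dev.
                "Id": "camera-device",
                "Name": "CameraDevice",
                "ResourceDataContainer": {
                    "LocalDeviceResourceData": {"SourcePath": "/dev/video0"}
                },
            },
            {   # Machine learning resource: a model archive stored in S3.
                "Id": "classifier-model",
                "Name": "ClassifierModel",
                "ResourceDataContainer": {
                    "S3MachineLearningModelResourceData": {
                        "S3Uri": "s3://my-models-bucket/classifier.tar.gz",
                        "DestinationPath": "/ml/classifier",
                    }
                },
            },
        ]
    },
)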

 

AWS GREENGRASS ML INFERENCE

 

AWS Greengrass supports Amazon SageMaker for training ML models that are later deployed to a Greengrass device. Amazon SageMaker trains models and exposes them as training jobs, which can be referenced by Greengrass ML resources. Greengrass ML resources can also pull trained models uploaded to S3 buckets, which then get deployed on local devices. The local Lambda then accesses the ML resource containing the cloud-trained models and runs predictions on locally generated data. This reduces the turnaround time, and the device can sync with the cloud whenever required for further training of the models.

 

If your use case involves data analytics on the cloud and you need to process and fetch data with very low latency, AWS Greengrass is an effective solution.