Sripathi Krishnan
#Django | 6 Min Read
In this article, we will learn how to implement social login in a Django application using social-auth-app-django, one of the best alternatives to the deprecated python-social-auth library.
What is Social-Auth-App-Django?
social-auth-app-django is an open-source Python module available on PyPI, maintained under the Python Social Auth organization. The project started out as a universal Python module called python-social-auth that could be used with any Python-based web framework. That library has since been deprecated: the code base was split into several modules to offer better support for individual Python web frameworks. The core logic of social authentication resides in the social-core module, which is then used by framework-specific modules such as social-auth-app-django, social-app-flask, etc.
Installation Process
Let us now create an app called authentication. This app will contain all the views/urls related to Django social authentication.
python manage.py startapp authentication
Make sure you have created a Django project, then install social-auth-app-django from PyPI.
pip install social-auth-app-django
Then, add ‘social_django’ to your installed apps.
INSTALLED_APPS = [
    ...
    'social_django',
    'authentication',
]
Now, migrate the database:
python manage.py migrate
Now, go to http://localhost:8000/admin and you should see three models under the SOCIAL_DJANGO app.
Configure Authentication Backend
Now, to use social-auth-app-django’s social login, we have to define our authentication backends in the settings.py file. Note that although we want to give our users a way to sign in through Google, we also want to retain the normal login flow, where a username and password can be used to log in to our application. To do that, configure AUTHENTICATION_BACKENDS this way:
AUTHENTICATION_BACKENDS = (
    'social_core.backends.google.GoogleOAuth2',
    'django.contrib.auth.backends.ModelBackend',
)
Note that we still have django.contrib.auth.backends.ModelBackend in our AUTHENTICATION_BACKENDS. This lets us retain the usual login flow.
Template and Template Context Processors
To keep it simple, we will have a plain HTML file with a link that redirects the user to the Google login.
Go ahead and add a templates folder to the authentication app and create a new index.html file in it. In this HTML file, we will just put a simple anchor tag that points to social-auth-app-django’s login URL (using the standard social:begin URL name):
<a href="{% url 'social:begin' 'google-oauth2' %}">Google+</a>
In authentication/views.py, render this template:
We will now have to add context processors, which provide social auth related data to template context:
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': ['templates'],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
                'social_django.context_processors.backends',  # this
                'social_django.context_processors.login_redirect',  # and this
            ],
        },
    },
]
URL Configurations
So, now we have our template set up, and we just need to add our index view and the Django social auth URLs to our project.
Go to the urls.py and add this code:
urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url('^api/v1/', include('social_django.urls', namespace='social')),
    url('', views.index),
]
Obtaining Google OAuth2 Credentials
To obtain the Google OAuth2 credentials, follow these steps:
  1. Go to the Google developer console
  2. Create a new project, in case you do not have one already (see the drop-down at the top left of your screen)
  3. Once the project is created, you will be redirected to a dashboard. Click the ENABLE API button
  4. Select Google+ API from the Social APIs section
  5. On the next page, click the Enable button to enable the Google+ API for your project
  6. To generate credentials, select the Credentials tab from the left navigation menu and hit the Create credentials button
  7. From the drop-down, select OAuth client ID
  8. On the next screen, select the application type (web application in this case) and enter your authorized JavaScript origins and redirect URI
For now, just set the origin as http://localhost:8000 and the redirect URI as http://localhost:8000/complete/google-oauth2/
Hit the create button and copy the client ID and client secret.
Configuration of OAuth Credentials
Now, go to settings.py and add your OAuth credentials like this:
SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = '<your-client-id>'
SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = '<your-client-secret>'
Finally, define the LOGIN_REDIRECT_URL
LOGIN_REDIRECT_URL = '/'
For setting up credentials for other OAuth2 backends, the pattern is as follows:
SOCIAL_AUTH_<BACKEND_NAME>_KEY = '<your-client-id>'
SOCIAL_AUTH_<BACKEND_NAME>_SECRET = '<your-client-secret>'
Usage
Now that the configuration is done, we can start off using the social login in the Django application. Go to http://localhost:8000 and click the Google+ anchor tag.
It should redirect to Google OAuth screen where it asks you to sign in. Once you sign in through your Gmail account, you should be redirected to the redirect URL.
To check if a user has been created, go to the Django admin (http://localhost:8000/admin) and click on the User social auths table. You should see a new user there, with the provider as google-oauth2 and the Uid as your Gmail id.
Triggering The OAuth URLs from Non-Django Templates
There are chances that you want to trigger the social login URLs from a template that is not going to pass through Django’s template context processors, for instance, when you want to implement the login in an HTML template developed using ReactJS. This is not as complicated as it looks. Just mention the href in your anchor tag like this (the api/v1/ prefix comes from the URL configuration above):
<a href="/api/v1/login/google-oauth2/">Google+</a>
The login URLs for other OAuth providers follow the same pattern. If you are not sure what the login URL should be, there are two ways to find it: either jump into the code base, or try this hack:
  1. Create a Django template and place an anchor tag with the respective URL, as shown in the Templates and Template Context Processors section.
  2. Remove the respective provider from AUTHENTICATION_BACKENDS in settings.py (in the case of Google OAuth2, remove social_core.backends.google.GoogleOAuth2).
  3. Click the anchor tag in the Django template. You will see Django’s regular error page saying “Backend not found”.
  4. Copy the current URL from the browser and use it in your non-Django template.
Summary
We learned what social-auth-app-django is and that it uses social-core behind the scenes. We also learned how to implement social login in Django, both through templates processed by Django and through non-Django templates, which could be developed using JS frameworks like ReactJS.
Sripathi Krishnan
#Business | 5 Min Read

At HashedIn, we commonly deploy Django based applications on AWS Elastic Beanstalk. While EB is great, it does have some edge cases. Here is a list of things you should be aware of if you are deploying a Django application.

Aside: If you are starting a new project meant for Elastic Beanstalk, the Django Project Template can simplify the configuration.

Gotcha #1: Auto Scaling Group health check doesn’t work as you’d think

Elastic Beanstalk lets you configure a health check URL. This health check URL is used by the elastic load balancer to decide if instances are healthy. But, the auto scale group does not use this health check.

So if an instance health check fails for some reason – elastic load balancer will mark it as unhealthy and remove it from the load balancer. However, auto scale group still considers the instance to be healthy and doesn’t relaunch the instance.

Elastic Beanstalk keeps it this way to give you the chance to ssh into the machine to find out what is wrong. If auto scaling group terminates the machine immediately, you won’t have that option.

The fix is to configure autoscale group to use elastic load balancer based health check. Adding the following to a config file under .ebextensions will solve the problem.


Resources:
   AWSEBAutoScalingGroup:
     Type: "AWS::AutoScaling::AutoScalingGroup"
     Properties:
       HealthCheckType: "ELB"
       HealthCheckGracePeriod: "600"

Credits: EB Deployer Tips and Trick

Gotcha #2: Custom logs don’t work with Elastic Beanstalk

By default, the wsgi account doesn’t have write access to the current working directory, so your log files won’t work. According to Beanstalk’s documentation, the trick is to write log files under the /opt/python/log directory. However, this doesn’t always work as expected: when Django creates a log file in that directory, the file is owned by root, and hence Django cannot write to it.

The trick is to run a small script as part of .ebextensions to fix this. Add the following content in .ebextensions/logging.config:


commands:
  01_change_permissions:
    command: chmod g+s /opt/python/log
  02_change_owner:
    command: chown root:wsgi /opt/python/log

With this change, you can now write your custom log files to this directory. As a bonus, when you fetch logs using elastic beanstalk console or the eb tool, your custom log files will also be downloaded.
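For example, a Django LOGGING setting that points a handler at that directory might look like the sketch below (the logger name myapp and file name app.log are placeholders, not from the original article):

```python
# settings.py (sketch): write application logs under /opt/python/log
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'app_file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': '/opt/python/log/app.log',  # picked up by `eb logs`
            'maxBytes': 10 * 1024 * 1024,
            'backupCount': 5,
        },
    },
    'loggers': {
        'myapp': {
            'handlers': ['app_file'],
            'level': 'INFO',
        },
    },
}
```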

Gotcha #3: Elastic load balancer health check does not set host header

Django’s ALLOWED_HOSTS setting requires you to whitelist the host names that are allowed. The problem is that the elastic load balancer health check does not set a hostname when it makes requests. It connects directly to the private IP address of your instance, and therefore the HOST header is the private IP address of your instance.

There are several not-so-optimal solutions to this problem:

Terminate the health check on Apache, for example by setting the health check URL to a static file served by Apache. The problem with this approach is that if Django isn’t working, the health check will not report a failure.

Use a TCP/IP based health check, which just checks if port 80 is up. This has the same problem: if Django doesn’t work, the health check will not report a failure.

Set ALLOWED_HOSTS = ['*']. This disables host checks altogether, opening up security issues. Also, if you mess up DNS, you can very easily send QA traffic to production.

A slightly better solution is to detect the internal IP address of the server and add it to ALLOWED_HOSTS at startup. Doing this reliably is a bit involved though. Here is a handy script that works assuming your EB environment is Linux:

import os
import urllib2

def is_ec2_linux():
    """Detect if we are running on an EC2 Linux Instance
       See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/identify_ec2_instances.html
    """
    if os.path.isfile("/sys/hypervisor/uuid"):
        with open("/sys/hypervisor/uuid") as f:
            uuid = f.read()
            return uuid.startswith("ec2")
    return False

def get_linux_ec2_private_ip():
    """Get the private IP Address of the machine if running on an EC2 linux server"""
    if not is_ec2_linux():
        return None
    response = None
    try:
        response = urllib2.urlopen('http://169.254.169.254/latest/meta-data/local-ipv4')
        return response.read()
    except Exception:
        return None
    finally:
        if response:
            response.close()

# ElasticBeanstalk healthcheck sends requests with host header = internal ip
# So we detect if we are in elastic beanstalk,
# and add the instance's private ip address
private_ip = get_linux_ec2_private_ip()
if private_ip:
    ALLOWED_HOSTS.append(private_ip)

Depending on your situation, this may be more work than you care about, in which case you can simply set ALLOWED_HOSTS to ['*'].

Gotcha #4: Apache Server on EB isn’t configured for performance

For performance reasons, you want text files to be compressed, usually using gzip. Internally, Elastic Beanstalk for Python uses the Apache web server, but it is not configured to gzip content.

This is easily fixed by adding yet another config file.

Also, if you are versioning your static files, you may want to set strong cache headers. By default, Apache doesn’t set these headers. This configuration file sets Cache-Control headers on any static file that has a version number embedded.
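A sketch of such a config file (the Apache conf path, the content types, and the versioned-filename pattern are assumptions to adapt to your stack):

```yaml
# .ebextensions/apache.config (sketch)
files:
  "/etc/httpd/conf.d/performance.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      # gzip text-based responses
      AddOutputFilterByType DEFLATE text/html text/css text/plain application/json application/javascript
      # strong cache headers for versioned static files, e.g. app.3f9a1c2b.js
      <LocationMatch "\.[0-9a-f]{8,}\.(css|js|png|jpg|gif|svg)$">
        Header set Cache-Control "public, max-age=31536000"
      </LocationMatch>
```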

Gotcha #5: Elastic Beanstalk will delete RDS database when you terminate your environment

For this reason, always create an RDS instance independently. Set the database parameters as an environment variable. In your Django settings, use dj_database_url to parse the database credentials from the environment variable.
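To make the idea concrete, here is a rough, stdlib-only sketch of what dj_database_url does with such an environment variable (the URL, the engine string, and the helper name are illustrative, not the library's actual internals):

```python
import os
from urllib.parse import urlparse

def parse_database_url(url):
    """Split a DATABASE_URL (e.g. postgres://user:pass@host:5432/name)
    into the pieces Django's DATABASES setting expects."""
    parsed = urlparse(url)
    return {
        'ENGINE': 'django.db.backends.postgresql',  # assumes a postgres:// URL
        'NAME': parsed.path.lstrip('/'),
        'USER': parsed.username,
        'PASSWORD': parsed.password,
        'HOST': parsed.hostname,
        'PORT': parsed.port or 5432,
    }

# In settings.py you would read the environment variable set on the EB environment:
db_config = parse_database_url(
    os.environ.get('DATABASE_URL', 'postgres://user:secret@db.example.com:5432/appdb'))
```

In practice you would just call dj_database_url.config() instead of hand-rolling this, but the sketch shows why keeping the credentials in an environment variable decouples the database from the Beanstalk environment's lifecycle.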

Harish Thyagarajan
#Analytics | 2 Min Read
JinjaSQL is our new open source library to generate SQL using a Jinja template.
Why should you use JinjaSQL?
At HashedIn, Django is used extensively. The Django ORM is great for most of the use cases, however, there are times when you just need to write a raw SQL query and bypass the ORM altogether. The most common use cases are reports and listing pages that need complex joins.
When you hit one of the 5% of use cases that require the expressiveness and power of a raw SQL query, it is unlikely that your query is a simple one-liner.
For those use cases, JinjaSQL helps you maintain the queries in an external template file. You can put in placeholder variables, add if/else conditions, use macros and all the power that is available to a regular Jinja template. You don’t have to manually track your bind parameters. It tracks them and binds them appropriately.
Preventing SQL Injection
Templates are not a new idea; however, they haven’t been popular for SQL because they are vulnerable to SQL injection. JinjaSQL never inserts values directly into the query. Instead, it gives you the generated SQL query and a list of bind parameters. It is then up to you to use them to execute the query. Take a look at an example:
SELECT username, sum(spend)
FROM transactions
WHERE start_date > {{ start_date }}
AND end_date < {{ end_date }}
If this template were rendered using plain-old Jinja2, you’d get:
SELECT username, sum(spend)
FROM transactions
WHERE start_date > '2016-01-01'
AND end_date < '2016-12-31'
With JinjaSQL, you get back two things,
SELECT username, sum(spend)
FROM transactions
WHERE start_date > %s
AND end_date < %s
and a list of bind parameters:
['2016-01-01' ,'2016-12-31']
The list of bind parameters can be integers, strings, or for that matter even python datetime objects.
Give JinjaSQL a try!
Try using it in your projects. In case you have any questions or comments, create an issue on GitHub.
Sripathi Krishnan
#Business | 6 Min Read
RabbitMQ is a message broker and an excellent choice for maintaining task queues. Here is how you can configure RabbitMQ on AWS in an auto-scaling, load-balanced environment.
Installing RabbitMQ On Ubuntu/Debian
# Add the official rabbitmq source to your apt-get sources.list
sudo sh -c "echo 'deb http://www.rabbitmq.com/debian/ testing main' > /etc/apt/sources.list.d/rabbitmq.list"
# Install the certificate
wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
sudo apt-key add rabbitmq-signing-key-public.asc
rm rabbitmq-signing-key-public.asc
# Now install the latest rabbitmq
sudo apt-get update
sudo apt-get install rabbitmq-server
The above script downloads the latest version of RabbitMQ from the RabbitMQ site and installs it. The official installation instructions are the reference for these steps.
Installing RabbitMQ on RPM-based Linux :
rpm --import https://www.rabbitmq.com/rabbitmq-signing-key-public.asc
yum install rabbitmq-server-3.5.4-1.noarch.rpm
Install RabbitMQ-Management Plugin:
The RabbitMQ management plugin is a web UI for administration purposes. You can use it to view the queues, queue elements, and the number of consumers. The management plugin is included in the RabbitMQ distribution. To enable it, use the command below.
sudo rabbitmq-plugins enable rabbitmq_management
Create an AMI of this EC2 Instance:
Once you have installed and configured RabbitMQ, create an AMI from the instance and note the AMI ID, so that you can use it later in the scripts to automatically scale RabbitMQ out or in.
Configuring RabbitMQ Cluster on AWS:
Now that we have our AMI, we will create a cluster on AWS. A cluster provides fault tolerance and load balancing.
Step 1: Create Security Groups. RabbitMQ requires several ports to work: some are needed for inter-node communication, others between clients and RabbitMQ, and one for the HTTP-based management interface.
RabbitMQ uses the following ports:
  1. 4369 for epmd (peer discovery)
  2. 5672 and 5671 for AMQP (without and with TLS)
  3. 25672 for inter-node and CLI tool communication
  4. 15672 for the management UI
If you are deploying this within a VPC (which you should) – then these ports will be open to traffic from other nodes in the VPC. If you are deploying this on an EC2 classic, you will have to create a Security group that allows instances to communicate over these ports.
Step 2: Create a Load Balancer
  1. Create a new classic load balancer
  2. Set the protocol to TCP
  3. Forward ports 5672 and 15672 to the instances over TCP
  4. Assign security group that you created in the first step.
  5. In Health Check, ping port 5672
  6. Don’t add any instances yet, because we haven’t launched any

Step 3: Create a Launch Configuration
  • Choose the AMI that we created earlier
  • Expand Advanced Details, Copy paste the python script in “UserData” from the section below
  • Note: Replace the _url value with the DNS Name of the load balancer that was created in Step 2. To get the DNS name, go to the load balancer page and select the load balancer; the DNS name is shown below it.
  • Click next until you reach the security group window. Then, select an existing security group and select the security group that is created in step 1.
Step 4: Create an Auto Scaling Group
  1. Check “Create an Auto Scaling group from an existing launch configuration”.
  2. Choose the launch configuration that was created in the previous step.
  3. Expand Advanced Details; under Load Balancing, check “Receive traffic from Elastic Load Balancer(s)” and choose the load balancer created in Step 2.
  4. In the scaling policies, scale the RabbitMQ machines based on memory utilization: remove 1 instance when memory utilization <= 30 for 300 seconds, and add 1 instance when memory utilization >= 70 for 60 seconds.
With this configuration, you should see two instances come up on the instances page.
RabbitMQ Launch Configuration Script
This configuration goes into the Launch Configuration as “User Data”.
Note: The variable ‘_url’ should be updated with the load balancer URL.
#!/usr/bin/env python
import json
import urllib2, base64

if __name__ == "__main__":
    prefix = ''
    from subprocess import call
    call(["rm", "-rf", "/var/tmp/aws-mon"])
    call(["rabbitmqctl", "add_vhost", "/admin"])
    call(["rabbitmqctl", "add_user", "admin", "admin"])
    call(["rabbitmqctl", "set_user_tags", "admin", "administrator"])
    call(["rabbitmqctl", "set_permissions", "-p", "/admin", "admin", ".*", ".*", ".*"])
    call(["rabbitmqctl", "set_policy", "-p", "/admin", "server-qa-cluster", ".*?",
          '{"ha-mode":"all","ha-sync-mode":"automatic"}'])
    call(["rabbitmqctl", "stop_app"])
    try:
        _url = 'http://ELB-rabbitmq-QA.us-east-1.elb.amazonaws.com:15672/api/nodes'
        print prefix + 'Get json info from ' + _url
        request = urllib2.Request(_url)
        base64string = base64.encodestring('%s:%s' % ('admin', 'admin')).replace('\n', '')
        request.add_header("Authorization", "Basic %s" % base64string)
        data = json.load(urllib2.urlopen(request))
        print prefix + 'request ok... looking for a running node'
        for r in data:
            if r.get('running'):
                print prefix + 'found running node to bind..'
                print prefix + 'node name: ' + r.get('name') + ' - running: ' + str(r.get('running'))
                call(["rabbitmqctl", "join_cluster", r.get('name')])
                break
    except Exception, e:
        print prefix + 'error while adding node: ' + str(e)
    finally:
        call(["rabbitmqctl", "start_app"])
The above code dynamically adds newly launched instances to the cluster, based on the ELB provided in the code as the URL.
What do the call methods mean?
By default, RabbitMQ starts in a reset state. We need to create a user and set that user up with administrator permissions for future logins. The block below creates the user “admin” with password “admin” – change it as you see fit.
call(["rabbitmqctl", "add_vhost", "/admin"])
call(["rabbitmqctl", "add_user", "admin", "admin"])
call(["rabbitmqctl", "set_user_tags", "admin", "administrator"])
call(["rabbitmqctl", "set_permissions", "-p", "/admin", "admin", ".*", ".*", ".*"])
Once the slave nodes are set up, we need to add a policy to replicate/synchronize the queue contents across nodes. This is done with the following command on the parent node, and it applies across all other nodes in the cluster.
call(["rabbitmqctl", "set_policy", "-p", "/admin", "server-qa-cluster", ".*?",
      '{"ha-mode":"all","ha-sync-mode":"automatic"}'])
You will have to stop the service and fire a command requesting it to join one of the already-running RabbitMQ services on another machine. The cluster between the two machines is created automatically when the ‘join_cluster’ request is fired.
# Stop the RabbitMQ application, which is running as a stand-alone queue service
sudo rabbitmqctl stop_app
# Join the cluster through one of the already-running nodes
sudo rabbitmqctl join_cluster rabbit@<node-name> --ram
# Start the RabbitMQ application again
sudo rabbitmqctl start_app

To verify, go to the management page at http://<load-balancer-dns>:15672 and you will notice 2 nodes, indicating that the cluster has been created.
Naureen Razi
#Django | 5 Min Read
Role Based Access Control provides restrictive access to the users in a system, based on their role in the organization. This feature makes application access refined and secure.
Django provides authentication and authorization features out of the box. You can use these features to build Role Based Access Control. Django user authentication has built-in models like User, Group, and Permission, which enable the granular access control your application needs.
Here Are Some Built-In Models
User Objects: Core of Authentication
This model stores the actual users in the system. It has basic fields like username, password, and email. You can extend this class to add more attributes that your application needs. Django user authentication handles authentication through sessions and middleware.
With every request, Django attaches a request object. Using this, you can get the details of the logged-in user through request.user. To achieve role-based access control, use request.user to authorize user requests to access information.
Groups: Way of Categorizing Users
These are logical groups of users, as required by the system. You can assign permissions and users to these groups. Django provides a basic view in the admin to create these groups and manage the permissions.
The group denotes the “role” of the user in the system. As an “admin”, you may belong to a group called “admin”. As “support staff”, you would belong to a group called “support”.
Permission: Granular Access Control
The defined groups control access based on the permissions assigned to each group. Django allows you to add, edit, and change permissions for each model by default.
You can use these permissions in the admin view or in your application. For example, suppose you have a model like Blog:
from django.db import models
from django.contrib.auth.models import User

class Blog(models.Model):
    pub_date = models.DateField()
    headline = models.CharField(max_length=200)
    content = models.TextField()
    author = models.ForeignKey(User)
Each of these models is registered as a ContentType in Django. All the permissions that Django creates under the Permission class carry a reference to this specific ContentType. In this case, the following permissions are created by default:

add_blog: any user or group with this permission can add a new blog.
change_blog: any user or group with this permission can edit a blog.
delete_blog: any user or group with this permission can delete a blog.
Adding Custom Permissions
Django’s default permissions are pretty basic and may not always meet the requirements of your application. Django allows you to add custom permissions and use them as you need. With a model Meta attribute, you can add new permissions:
class Blog(models.Model):
    ...
    class Meta:
        permissions = (
            ("view_blog", "Can view the blog"),
            ("can_publish_blog", "Can publish a blog"),
        )

These extra permissions are created along with the default permissions when you run manage.py migrate.
How To Use These Permissions
You can assign a permission to a user or a group. For instance, you can give all the permissions to the group “admin” and ensure that the “support” group gets only the “change_blog” permission. This way, only an admin user can add or delete a blog. You need role-based access control checks of this kind in your views, templates, or APIs.
Views:
To check permissions in a view, use the has_perm method or a decorator. The User object provides the method has_perm(perm, obj=None), where perm is of the form “<app label>.<permission codename>”; it returns True if the user has the permission.
user.has_perm('blog.can_publish_blog')
You can also use the decorator permission_required(perm, login_url=None, raise_exception=False). This decorator also takes the permission in the form “<app label>.<permission codename>”. Additionally, it takes a login_url, which can be used to pass the URL of your login/error page. If the user does not have the required permission, they will be redirected to this URL.
from django.contrib.auth.decorators import permission_required

@permission_required('blog.can_publish_blog', login_url='/signin/')
def publish_blog(request):
    ...
If you have a class-based view, you can use PermissionRequiredMixin. You can pass one or many permissions to the permission_required attribute.
from django.contrib.auth.mixins import PermissionRequiredMixin

class PublishBlog(PermissionRequiredMixin, View):
    permission_required = 'blog.can_publish_blog'
Templates:
The permissions of the logged-in user are stored in the perms template variable, so you can check for a particular permission within a template.
Level of Access Control: Middleware vs Decorator
A common question that plagues developers: “Where do I put my permission check, in a decorator or in middleware?” To answer this, you just need to know whether your access control rule is global or specific to a few views.
For instance, imagine you have two access control rules. The first states that all users accessing the application must have the “view_blog” permission. As per the second rule, only a user with the “can_publish_blog” permission can publish a blog.
In the second case, you can add the check as a decorator on the publish_blog view.
In the case of the former, you should put the check in your middleware, as it applies to all your URLs. A user who does not have this permission cannot enter your application. Even if you add it as a decorator, the views are safe.
But adding the decorator to every view of your application would be cumbersome, and if a new developer forgets to add the decorator to a view he wrote, the rule gets violated.
Therefore, you should check any permission that is valid globally in the middleware.
Summary
Thus, you can use Django user authentication to achieve complete role-based access control for your application. Go ahead and make the most of these features to make your software more secure.
    Naureen Razi
    #Django | 5 Min Read
    Role Based Access Control provides restrictive access to the users in a system based on their role in the organization. This feature makes the application access refined and secure. Django provides authentication and authorization features out of the box. You can use these features to build a Role Based Access Control. Django user authentication has built-in models like User, Group, and Permission. This enables a granular access control required by the application.
    Here Are Some Build-In Models
    User Objects: Core of Authentication
    This model stores the actual users in the system. It has basic fields like username, password, and email. You can extend this class to add more attributes that your application needs. Django user authentication handles the authentication through session and middlewares. With every request, Django hooks a request object. Using this, you can get the details of the logged in user through request.user. To achieve role-based access control, use the request.user to authorize user requests to access the information.
    Groups: Way of Categorizing Users
    These are logical groups of users as required by the system. You can assign permissions and users to these groups. Django provides a basic view in the admin to create these groups and manage the permissions. The group denotes the “role” of the user in the system. As an “admin”, you may belong to a group called “admin”. As a “support staff”, you would belong to a group called “support”.
    Permission: Granular Access Control
    The defined groups control access based on the permissions assigned to each group. Django allows you to add, edit, and change permissions for each model by default. You can use these permissions in the admin view or in your application. For example, if you have a model like ‘Blog’.

    Class Blog(models.Model):
     pub_date = models.DateField()
     headline = models.CharField(max_length=200)
     content = models.TextField()
     author = models.ForeignKey(User)
    

    Each of these models is registered as ContentType in Django. All of the permissions that Django creates underclass Permission will have a reference to this specific ContentType. In this case, the following permissions will be created by default: add_blog: Any User or group that has this permission can add a new blog. change_blog: Any user or group that has this permission can edit the blog. delete_blog: Any user or group that has this permission can delete a blog.

    Adding Custom Permissions
    Django default permissions are pretty basic. It may not always meet the requirements of your application. Django allows you to add custom permissions and use them as you need. With a model meta attribute, you can add new permissions:

    Class Blog(models.Model):
      …
      Class Meta:
         permissions = (
               ("view_blog", "Can view the blog"),
               ("can_publish_blog", "Can publish a blog"),
         )
    

    These extra permissions are created along with default permissions when you run manage.py, migrate.

    How To Use These Permissions
    You can assign permission to a user or a group. For instance, you can give all the permissions to the group “admin”. You can ensure that the “support” group gets only the “change_blog” permission. This way only an admin user can add or delete a blog. You need Role-Based Access control for this kind of permission in your views, templates or APIs.
    Views:
    To check permissions in a view, use has a _perm method or a decorator. The User object provides the method has_perm(perm, obj=None), where perm is “.” and returns True if the user has the permission. user.has_perm(‘blog.can_publish_blog”) You can also use a decorator ‘permission_required(perm, login_url=None, raise_exception=False)’. This decorator also takes permission in the form “.”. Additionally, it also takes a login_url which can be used to pass the URL to your login/error page. If the user does not have the required permission, then he will be redirected to this URL. from Django.contrib.auth.decorators import permission_required @permission_required(‘blog.can_publish_blog’, login_url=’/signin/’) def publish_blog(request): … If you have a class-based view then you can use “PermissionRequiredMixin”. You can pass one or many permissions to the permission_required parameter. from Django.contrib.auth.mixins import PermissionRequiredMixin class PublishBlog(PermissionRequiredMixin, View): permission_required =blog.can_publish_blog’ [/et_pb_text][et_pb_text admin_label=”Templates:” _builder_version=”3.0.86″ background_layout=”light”]
    Templates:
    The permissions of the logged-in user are stored in the perms template variable, so you can check for a particular permission within an app directly in your templates.
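    For example, a template-level check might look like this (a sketch assuming the blog app and can_publish_blog permission from earlier, and a hypothetical publish URL):

    ```html
    {% if perms.blog.can_publish_blog %}
        <a href="/blog/publish/">Publish a blog</a>
    {% endif %}
    ```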
    Level of Access Control: Middleware Vs Decorator
    A common question that plagues developers: “Where do I put my permission check, decorator or middleware?” To answer it, you just need to know whether your access control rule is global or specific to a few views. For instance, imagine you have two access control rules. The first states that all users accessing the application must have the “view_blog” permission. As per the second rule, only a user having the “can_publish_blog” permission can publish a blog. In the second case, you can add the check as a decorator to the “publish_blog” view. In the case of the former, you would put the check in your middleware, as it applies to all your URLs. A user who does not have this permission cannot enter your application. Even if you add it as a decorator, the views are safe. But adding the decorator to every view of your application would be cumbersome, and if a new developer forgets to add the decorator to a view they wrote, the rule gets violated. Therefore, you should check any permission that is valid globally in the middleware.
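    A global rule like the first one can be sketched as middleware. This is a framework-agnostic illustration: in a real Django project you would return an HttpResponseRedirect, and the permission name and login URL below are assumptions carried over from the earlier examples:

    ```python
    # Sketch of a global permission check as new-style middleware.
    # "blog.view_blog" and "/signin/" are assumed values from the text.

    class GlobalPermissionMiddleware:
        def __init__(self, get_response, perm="blog.view_blog", login_url="/signin/"):
            self.get_response = get_response
            self.perm = perm
            self.login_url = login_url

        def __call__(self, request):
            user = request.user
            # Deny access application-wide when the permission is missing.
            if getattr(user, "is_authenticated", False) and not user.has_perm(self.perm):
                return ("redirect", self.login_url)  # stand-in for an HTTP redirect
            return self.get_response(request)
    ```

    Because the check runs before every view, no individual view needs the decorator for this rule.
    
    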
    Summary
    Thus, you can use Django user authentication to achieve complete role-based access control for your application. Go ahead and make the most of these features to make your software more secure.
    Sripathi Krishnan
    #Technology | 4 Min Read

    Authenticating REST APIs calls for selecting the approach that suits your application. There are several ways:

    Use Case 1. API for Single Page Application

    There are two choices for Single Page Applications:

    • Session Based
    • Token-Based authentication

    The set of questions that need to be asked are:

    1. Should the sessions be invalidated before they expire? If yes, prefer sessions.
    2. Should the session end based on inactivity, as opposed to ending after a fixed time? If yes, prefer sessions.
    3. Should “remember me” functionality be supported? If yes, prefer sessions.
    4. Will mobile applications use the same APIs? If yes, prefer token-based authentication (but ensure a separate API is built for these use cases).
    5. Is your web framework protected against CSRF? Prefer token-based authentication if the answer is “no” or if you don’t know what CSRF is.

    If token-based authentication is preferred, avoid JSON Web Tokens. JWT should be used as a short-lived, one-time token, as opposed to something that is reused multiple times. Alternatively, you could create a random token, store it in Redis/Memcached, and validate it on every request.

    Use Case 2. API for Mobile Apps

    Prefer token-based authentication. Mobile apps do not automatically maintain and send session cookies. Hence it would be far easier for a mobile app developer to set an authentication token as opposed to setting a session cookie.
    Signature-based mechanisms aren’t useful either, as secret keys cannot be safely embedded in a mobile application. Only when you have another channel, say a QR code, to pass secrets could you use signature-based mechanisms.
    A random token stored in Redis or Memcached is the most ideal approach. Ensure you use a cryptographically secure random generator in your programming.
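    The approach above can be sketched in a few lines. A plain dict stands in for Redis/Memcached here; in production you would store the token with an expiry (e.g. Redis SETEX):

    ```python
    import secrets

    # Opaque-token auth: generate a cryptographically secure random token
    # and keep it server-side, keyed to the user.

    TOKEN_STORE = {}  # token -> user_id (a dict stands in for Redis here)

    def issue_token(user_id):
        token = secrets.token_urlsafe(32)  # 256 bits of secure randomness
        TOKEN_STORE[token] = user_id
        return token

    def authenticate(token):
        # Validate on every request; unknown tokens are rejected.
        return TOKEN_STORE.get(token)
    ```

    The secrets module uses the operating system's CSPRNG, which satisfies the "cryptographically secure" requirement above.
    
    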

    Use Case 3. Building a Service that will be Used by Server Side Code Only

    The three choices for APIs that are called from a server-side application are:

    • Basic authentication over https
    • Token-based authentication
    • Signature-based authentication
    1. Will the API be used only internally and only by applications you control, and are the APIs themselves of low value? If yes, basic authentication over HTTPS is acceptable.
    2. Do you want the keys to be long-lived? If yes, prefer signature-based authentication, since the keys themselves are never sent over the wire.
    3. Do you want the same APIs to be used for both mobile and web apps? If yes, prefer token-based authentication, since signature-based auth requires clients to store secrets.
    4. Are you using OAuth? If yes, the decision is made for you: OAuth 1 uses signature-based authentication, whereas OAuth 2 uses token-based authentication.
    5. Do you provide a client library to access your APIs? If yes, prefer signature-based auth, because you can then write the cryptography code once and provide it to all your clients.

    Use Case 4. Using JSON Web Tokens

    JWT works best for single use tokens. Ideally, a new JWT must be generated for each use.
    Acceptable use cases:

    1. Server-to-server API calls, where the client can store a shared secret and generate a new JWT for each API call.
    2. Generating links that expire shortly, such as those used for email verification
    3. As a way for one system to provide a logged in user limited access to another system.
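    For the server-to-server case above, a new short-lived token can be minted per call. A minimal sketch of an HS256-style JWT using only the standard library (a real system should use a vetted library such as PyJWT; SECRET is a placeholder shared secret):

    ```python
    import base64, hashlib, hmac, json, time

    SECRET = b"server-side-shared-secret"  # placeholder, never hard-code in practice

    def _b64(data):
        # JWT uses unpadded URL-safe base64
        return base64.urlsafe_b64encode(data).rstrip(b"=")

    def make_jwt(payload, ttl_seconds=60):
        header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
        claims = dict(payload, exp=int(time.time()) + ttl_seconds)
        body = _b64(json.dumps(claims).encode())
        signing_input = header + b"." + body
        sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
        return signing_input + b"." + sig

    def verify_jwt(token):
        signing_input, _, sig = token.rpartition(b".")
        expected = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
        if not hmac.compare_digest(sig, expected):
            return None  # tampered or wrongly signed
        body = signing_input.split(b".")[1]
        padded = body + b"=" * (-len(body) % 4)
        claims = json.loads(base64.urlsafe_b64decode(padded))
        # Reject tokens past their short expiry
        return claims if claims["exp"] >= time.time() else None
    ```

    The short ttl_seconds is what makes this suitable as a one-time token rather than a long-lived session credential.
    
    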

    Use Case 5. OAuth for APIs

    OAuth should be used only if the below criteria are met:

    1. Your API exposes user-specific data
    2. Some of that user-specific data is accessed by third-party developers you don’t trust
    3. You want to ask your users whether they want to share their data with the third-party developer

    If the answer is “yes” to ALL three questions, you certainly require OAuth.

    Use Case 6. Compare all the Different Authentication Techniques

    The Authentication Techniques for APIs document lists the pros and cons of each approach.

    Pranav J.Dev
    #Technology | 8 Min Read
    This article will explain how we can run a service on Android 6 and above without getting killed by the OS. From Marshmallow onwards, Android has introduced two new features called Doze mode and Standby mode. These give some more life to your phone, i.e. extend its battery.
    Doze Mode
    Doze mode is exactly what its name suggests. If the user leaves the device unplugged for a period of time with the screen off, the device enters Doze mode. In Doze mode, the system attempts to conserve battery by restricting apps from accessing the network and from running CPU-intensive services, and by deferring their jobs, syncs, and alarms.
    Standby Mode
    The system enters Standby mode if the user has not actively used the app for a period of time and none of the app’s processes is currently in the foreground (either as an activity or a foreground service). In both Doze and Standby modes, the system prevents apps from accessing the network and using the CPU. Once the device is connected to a charger, the system exits these modes. In Doze mode, Android allows all apps about 10 seconds to run their pending tasks; this is called a maintenance window. At the end of each maintenance window, the system re-enters Doze mode. This continues until the device is connected to a charger or the user interacts with the application. To save battery, maintenance windows do not occur as frequently as one would like: the first window occurs an hour after the last activity, the next after two hours, the next after four, and so on. As the phone enters Doze mode, it brings about a few changes that work against any app that needs any one of the below:
    • Network Access will be suspended
    • Wake locks will be ignored
    • Stops the sync adapters
    • Job Schedules will not be allowed
    How to solve Doze Mode
    Doze mode helps us in many ways, but it comes at a cost: it renders a major part of some applications useless, so we need to figure out a way around it. Here we will see how to create an application that sends some information to a backend API every 2 minutes, regardless of whether the app is running or not. Android takes certain actions to extend battery life, which means it needs to kill some processes from time to time to reclaim memory for more important ones. A process with high memory usage is likely to be killed by Android when it is backgrounded. So think about it: if you keep your background component and UI in the same process, it will use more memory and is more likely to be killed when the app goes into the background. We know what happens next: all the components (services, content providers, activities, broadcast receivers) are killed when Android kills the process. Having identified the issue, the solution presents itself: decouple the background components from the UI and keep them in a different process.
    How to Do it
    To run an Android component in a different process, specify the process name while defining that component in the manifest. By default, all components run in the same process; we can place a service in a different process by giving it a different process name. Ordinarily, if you want a service to run in the background indefinitely, you return START_STICKY from the service’s onStartCommand, and the service is recreated by the Android OS if it is destroyed for any reason. But this will not work on devices running Android 6 and above, due to Standby and Doze mode restrictions.
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
    super.onStartCommand(intent, flags, startId);
    return START_STICKY;
    }
    
    These battery optimizations take place automatically if the user is not interacting with the phone for some amount of time, a charger is not connected, or the screen is off. To get around these modes, we have to create a foreground service, which saves our process from Standby mode, because a foreground service gives the user a way to interact with the service (a music player is an example of a foreground service: it runs in the background, but the user can still switch between songs, play, pause, and perform other operations).
    Other Options
    The Android Doze mode documentation suggests using FirebaseJobDispatcher, which works even in Doze mode. But it runs only once every 15 minutes, so if we want to make API calls at shorter intervals, FirebaseJobDispatcher is not a good option. To my knowledge, AlarmManager is the only way to do this. As the name suggests, it can be used to set alarms that fire at a desired interval, and we can perform some operation when the alarm fires. But setExact(), the default method used in older versions, will not fire during Doze on Android 6 and above. Android introduced two new methods from Marshmallow onwards, setAndAllowWhileIdle() and setExactAndAllowWhileIdle(), which fire even in Doze mode (though at most once per 9 minutes per app). If our app needs to call an API every 1 or 2 minutes, then we have to switch off Doze mode.
    Switch off Doze Mode
    As you are aware by now, there is no setting to disable Doze mode in Android. There are some ways to turn it off, but they require user interaction, such as connecting a charger to the device or interacting with the app continuously. Switching the screen on for a while is not difficult, but the user can switch the screen off at any time by pressing the power key. Hence, a few steps need to be taken so that our service continues even after the user turns off the screen by pressing the hardware button. Let’s walk through the steps mentioned below.
    Step 1 Create a foreground service with different process name.
    <service android:name=".backgroundmanager.BackgroundOperationsManagerService" android:enabled="true" android:process="com.hashedin.processname" />

    Step 2 Create an alarm which will run every 30 seconds, checking the Android version:
    AlarmManager alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
    Intent intent = new Intent(this, AlarmBroadCastReceiver.class);
    PendingIntent pendingIntent = PendingIntent.getBroadcast(this,
            ALARM_REQUEST_CODE, intent, PendingIntent.FLAG_UPDATE_CURRENT);
    // triggerAtMillis is an absolute timestamp, so fire 30 seconds from now
    long triggerAt = System.currentTimeMillis() + 30000;
    if (Build.VERSION.SDK_INT >= 23) {
        alarmManager.setExactAndAllowWhileIdle(AlarmManager.RTC_WAKEUP, triggerAt, pendingIntent);
    } else if (Build.VERSION.SDK_INT >= 19) {
        alarmManager.setExact(AlarmManager.RTC_WAKEUP, triggerAt, pendingIntent);
    } else {
        alarmManager.set(AlarmManager.RTC_WAKEUP, triggerAt, pendingIntent);
    }
    

    Step 3 Create a broadcast receiver for catching the alarm; it should be in the same process where the foreground service is running:
    <receiver android:name=".backgroundmanager.alarmmanager.AlarmBroadCastReceiver"
    android:enabled="true"
    android:process="com.hashedin.processname"/>
    This broadcast receiver should extend WakefulBroadcastReceiver:

    class AlarmBroadCastReceiver extends WakefulBroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            if (screenWakeLock == null) {
                PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
                screenWakeLock = pm.newWakeLock(PowerManager.SCREEN_DIM_WAKE_LOCK | PowerManager.ACQUIRE_CAUSES_WAKEUP,
                        "ScreenLock tag from AlarmListener");
                screenWakeLock.acquire();
            }
            Intent service = new Intent(context, WakefulService.class);
            startWakefulService(context, service);
            if (screenWakeLock != null)
                screenWakeLock.release();
        }
    }
    This requests a wake lock and starts a service declared in the same process. That intent service then starts the foreground service again, which performs the desired operation, as the device is now safe from Doze and Standby modes.
    Step 4 (Optional) Create a FirebaseJobDispatcher job scheduled for every 15 minutes, in case the device goes into idle mode (this is an exceptional case):
    private void scheduleFirebasePeriodicTask() {
    FirebaseJobDispatcher dispatcher = new FirebaseJobDispatcher(new GooglePlayDriver(this));
    Job periodicJob = dispatcher.newJobBuilder()
    // the JobService that will be called
    .setService(PeriodicFirebaseJobService.class)
    // uniquely identifies the job
    .setTag(PERIODIC_TASK_TAG)
    // Recurring job
    .setRecurring(true)
    // persist past a device reboot
    .setLifetime(Lifetime.FOREVER)
    // start within 900 seconds (15 minutes) from now
    .setTrigger(Trigger.executionWindow(0, 900))
    // don't overwrite an existing job with the same tag
    .setReplaceCurrent(true)
    // retry with exponential backoff
    .setRetryStrategy(RetryStrategy.DEFAULT_EXPONENTIAL)
    // constraints that need to be satisfied for the job to run
    .setConstraints(
    // only run when the device is idle
    Constraint.DEVICE_IDLE
    )
    .build();
    dispatcher.mustSchedule(periodicJob);
    }
    
    This also starts the service every 15 minutes when the device is idle, so we can be sure the service does not get killed.
    Conclusion
    This article discussed how we can keep an app’s background service running even when the device enters Doze mode. As there are no settings to enable or disable Doze mode, regular users can simply enjoy the extra battery life. Happy coding!
    Vaibhav Singh
    #Technology | 5 Min Read
    Salesforce provides a REST API for interacting with its platform. It is the most common way to integrate with the third party services/applications. Its advantages include ease of integration and development, and it’s an excellent choice of technology for use with mobile applications and Web 2.0 projects.
    The Salesforce REST API is best suited for browser or mobile apps which don’t need access to large volumes of records. In case you want to access large volumes of records, you should probably explore the Salesforce BULK API. The Salesforce REST API supports JSON and XML.
    Step 1: Setting up OAuth 2.0
    Before we can access any Salesforce data we will have to authenticate ourselves using OAuth 2.0. But we will have to first enable OAuth 2.0 on our Salesforce account.
    1. Create a connected app in Salesforce
    2. Enter Apps in the Quick Find box, select Apps (under Build | Create), then click the name of the connected app.
    3. Enable OAuth settings and specify your callback URL and OAuth scopes.
    4. On clicking SAVE, a consumer key and consumer secret are generated.
    Salesforce supports the following OAuth flows:
    • Web server flow, where the server can securely protect the consumer secret.
    • User-agent flow, used by applications that cannot securely store the consumer secret.
    • Username password flow, where the application has direct access to user credentials.
    We will be using Username password flow to ease the integration but you can setup any flow based on your requirements.
    Step 2: Logging In
    To access data on Salesforce we need to authorize ourselves using an access token. Using the OAuth flow we will be generating the access token.
    The access token can be obtained by making a POST request to the appropriate endpoint such as https://login.salesforce.com/services/oauth2/token or https://test.salesforce.com/services/oauth2/token. The required parameters are:
    a. grant_type: The value should be ‘password’
    b. client_id: The Consumer Key from the connected app definition.
    c. client_secret: The Consumer Secret from the connected app definition.
    d. username: The end-user’s username, e.g. yourusername@domain.com.
    e. password: The end-user’s password concatenated with their security token, which you will need to generate from your Salesforce account. For example, if a user’s password is yourpassword and their security token is XXXXXXXXXX, then the value provided for this parameter must be yourpasswordXXXXXXXXXX.
    These parameters are passed as x-www-form-urlencoded. The request body will look something like this:
    grant_type=password&client_id=3MVG9lKcPoNINVBIPJjdw1J9LLM82Hn
    FVVX19KY1uA5mu0QqEWhqKpoW3svG3XHrXDiCQjK1mdgAvhCscA9GE&client_secret=
    1955279925675241571&username=testuser%40salesforce.com&password=yourpassword123456XXXXXXX
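    The body above can be assembled with the Python standard library. All credential values below are placeholders, not real keys; the resulting string would be POSTed to the token endpoint with Content-Type: application/x-www-form-urlencoded (e.g. via the third-party requests library or urllib.request):

    ```python
    from urllib.parse import urlencode

    # Placeholder credentials for illustration only
    params = {
        "grant_type": "password",
        "client_id": "YOUR_CONSUMER_KEY",
        "client_secret": "YOUR_CONSUMER_SECRET",
        "username": "testuser@salesforce.com",
        "password": "yourpassword" + "XXXXXXXXXX",  # password + security token
    }

    # urlencode percent-escapes reserved characters such as '@'
    body = urlencode(params)
    ```
    
    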
    
    Salesforce will verify the user credentials and if authenticated returns the following response with the access token.
    {"id":"salesforce_id",
    "issued_at":"1278448832702",
    "instance_url":"https://***yourInstance***.salesforce.com/",
    "signature":"0CmxinZirTD+zMpvIWYGb/bdJh6XfOH6EQ=",
    "access_token": "00Dx0000000BV7z"}
    
    We can use this access token to access the data on Salesforce.
    Step 3: Accessing Data
    Every HTTP method is used to indicate a specific action in Salesforce.
    1. HEAD is used to retrieve object/record metadata.
    2. GET is used to retrieve information about record/object.
    3. POST is used to create a new object.
    4. PATCH is used to update a record.
    5. DELETE is used to delete a record.
    There are multiple ways we can access data on Salesforce; for every request, we will have to pass the access token in the request header.
    1. Getting Salesforce version:
    URL: https://yourInstance.salesforce.com/services/data/
    method type: GET
    2. Getting List of Resources:
    URL: https://yourInstance.salesforce.com/services/data/{version}/
    method type: GET
    This method returns the list of resources available on the Salesforce version provided in the URL, example: v20.0
    3. Getting List of Objects:
    URL:https://yourInstance.salesforce.com/services/data/v20.0/{resource_name}
    method type: GET
    This provides us the available objects (sObjects) in the resource passed in the URL.
    4. Getting Object Metadata:
    URL:https://yourInstance.salesforce.com/services/data/v20.0/sobjects/{Object Label}
    method type: GET
    This provides us the metadata for the object, such as the Account object here.
    5. Getting Record Data:
    URL:https://yourInstance.salesforce.com/services/data/v20.0/sobjects/Accounts/{Object ID}
    method type: GET
    This provides us the data of the objects based on the Id which we pass in the request.
    6. SOQL for Custom retrieval:
    Salesforce provides an option to execute SOQL queries, which are very similar to SQL queries, to retrieve data.
    Example of SOQL:
    URL:https://yourInstance.salesforce.com/services/data/v20.0/query?q=Select+name+from+Account
    method type: GET
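    Composing the query URL and the Authorization header might look like this in Python. The instance URL and access token are placeholder values carried over from the login step:

    ```python
    from urllib.parse import quote

    # Placeholders from the earlier login response
    instance_url = "https://yourInstance.salesforce.com"
    access_token = "00Dx0000000BV7z"

    soql = "SELECT name FROM Account"

    # quote() percent-escapes the spaces in the SOQL query
    url = f"{instance_url}/services/data/v20.0/query?q={quote(soql)}"
    headers = {"Authorization": f"Bearer {access_token}"}
    ```

    A GET request to this URL with these headers returns the query results as JSON.
    
    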
    Summary
    Salesforce’s REST APIs are pretty straightforward, easy to integrate, and work with simple HTTP requests, but there are many open source packages that wrap these HTTP requests and provide a simpler interface for developers. A few of them are:
    * Python: simple-salesforce
    * Node: node-salesforce
    * Ruby: restforce
    In case you are looking for the Salesforce BULK APIs, check out the official documentation.
    Harshit Singhal
    #Business | 6 Min Read
    As Tim Brown (the CEO of IDEO) beautifully puts “Design thinking is a human-centered approach to innovation that draws from the designer’s toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success.”
    The importance of design has been constantly increasing over the years. The consumers of today’s generation have quick access to the global marketplaces. They do not distinguish between physical and digital experiences anymore.
    This has made it difficult for companies to make their products or services stand out from the rest of the competitors.
    Infusing your company with a design-driven culture that puts the customer first may not only provide real and measurable results but also give you a distinct competitive advantage.
    All the firms that have embraced a design-driven culture have certain things in common.
    First, these firms consider design to be more than a department. Firms that primarily focus on design try to encourage all functions to focus more on their customers.
    By doing this, they are also conveying a message that design is not a single department, in fact, design experts are everywhere in an organization working in cross-functional teams and having constant customer interaction.
    Second, for such companies, design is much more than a phase. Popular design-driven companies use both qualitative and quantitative research during the early product development phase, bringing together techniques such as ethnographic research and in-depth data analysis to clearly understand their customers’ needs.
    Why Design Thinking?
    Thinking like a designer can certainly transform the way companies develop products/services, strategies, and processes.
    If companies can bring together what is most desirable from a human point of view with what is technologically feasible and also economically viable, they can certainly transform their businesses.
    This also gives an opportunity to people who are not trained as designers to utilize creative tools to tackle a range of challenges.
    Design Thinking Approach
    Empathize: Understand your users as clearly as possible and empathize with them.
    Define: Clearly define the problem that needs to be sorted and bring out a lot of possible solutions.
    Ideate: Channel your focus on the final outcomes, not the present constraints.
    Prototype: Use prototypes for exploring possible solutions.
    Test: Test your solutions, reflect on the results, improvise the solution, and repeat the process.
    (Recently, we had Mr. Ashish Krishna (Head of Design – Prysm) addressing our interns about the same Design Thinking Approach )
    Facts that prove the importance of having a Design Thinking Approach
    Some years ago, Walmart had revamped its e-commerce experience, and as a result, the unique visitors to its website increased by a whopping 200%. Similarly, When BOA (Bank of America) undertook a user-centered design of its process for the account registrations, the online banking traffic shot up by 45%.
    In a design-driven culture, firms are not afraid to launch a product that is not totally perfect. This means going to market with an MVP (minimum viable product), learning from customer feedback, incorporating it, and then building and releasing the next version of the product.
    A classic example of this is Instagram, which launched a product, learned which features were most popular, and then re-launched a new version. As a result, there were 100,000 downloads in less than a week.
    ROI from Design
    Let’s take a look at some examples of how design impacted the ROI of companies.
    The Nike – Swoosh, which is one of the most popular logos across the globe, managed to sell billions of dollars of merchandise through the years. The icon was designed in the year 1971 and at that time the cost was only $35. However, after almost 47 years, that $35 logo evolved into a brand, which Forbes recently estimated to be worth over $15 billion.
    Some years back the very popular ESPN.com received a lot of feedback from users for their cluttered and hard to navigate homepage. The company went ahead and redesigned their website, and as a result, the redesign garnered a 35% increase in their site revenues.
    Some Benefits of having a Design Thinking Approach
    Helps in tackling creative challenges: Design thinking gives you an opportunity to take a look at problems from a completely different perspective. The process of design thinking allows you to look at an existing issue in a company using creativity.
    The entire process will involve some serious brainstorming and the formulation of fresh ideas, which can expand the learner’s knowledge. By putting design thinking approach to use, professionals are able to collaborate with one another to get feedback, which thereby helps in creating an invaluable experience to end clients.
    Helps in effectively meeting client requirements: As design thinking involves prototyping, all the products at the MVP stage will go through multiple rounds of testing and customer feedback for assured quality.
    With a proper design thinking approach in place, you will most likely meet the client expectations as your clients are directly involved in the design and development process.
    Expand your knowledge with design thinking: The design process goes through multiple evaluations. The process does not stop even after the deliverable is complete.
    Companies continue to measure the results based on the feedback received and ensure that the customer is having the best experience using the product.
    By involving oneself in such a process, the design thinkers constantly improve their understanding of their customers, and as a result, they will be able to figure out certain aspects such as what tools should be used, how to close the weak gaps in the deliverable and so on.
    Conclusion
    If we take a closer look at a business, we will come to a realization that the lines between product/services and user environments are blurring. If companies can bring out an integrated customer experience, it will open up opportunities to build new businesses.
    Design thinking is not just a trend that will fade away in a month. It is definitely gaining some serious traction, not just in product companies, but also in other fields such as education and science.