SignalR Sample Application – Hello World

In my last post, I described the simplicity that SignalR brings to the table when we want to include real-time, asynchronous communication between server and client(s) in our applications. In this post I will build a simple application, the proverbial 'Hello World' kind, to introduce the basic concepts of this new library.

The following are the prerequisites for the application:

1. Visual Studio 2012

2. .NET Framework 4.5

3. ASP.NET and Web Tools 2012.2

The best thing about the ASP.NET and Web Tools 2012.2 update is the included template that makes adding SignalR capabilities to your project amazingly easy. Our website will be based on ASP.NET Web Forms and will use the JavaScript client, which comes in the form of a jQuery plugin. So let's start.

1. Fire up Visual Studio and create a new empty ASP.NET Web Forms website.

2. Right-click on the website and add a SignalR Hub class, providing "HelloWorld" as the name for the hub class. This will add the following items to the website:

a) App_Code folder, which will house our hub class, and the App_Start folder, which contains the RegisterHubs.cs file housing the code that maps the available hubs at application startup.

b) Scripts folder, which houses all the JavaScript files, including the bundled jQuery library and the JavaScript client built on top of it.

c) Bin folder, housing all the necessary assemblies that coordinate things on the server side. This also contains the JSON serializer Json.NET.

d) packages.config which lists all the external dependencies.

Now we will have to change this setup a little bit in order to make it work in a Web Forms website. Remove the App_Start folder and the RegisterHubs.cs file it contains. The App_Start folder is something that an MVC website can use to execute code at application startup; a Web Forms website uses Global.asax to do the same. I believe this is a template error that will be corrected in future releases.

Delete the App_Start folder and its contents


3. Add a Global.asax file to the solution. The markup portion of your Global.asax file should look like this:

<%@ Application Language="C#" Inherits="Global" CodeBehind="~/App_Code/Global.asax.cs"%>
 
<script runat="server">
</script>

 

and the code behind portion should look like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Routing;
using Microsoft.AspNet.SignalR;
 
/// <summary>
/// Summary description for Global
/// </summary>
public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        // Code that runs on application startup.
        // Maps the available hubs to the default address /signalr/hubs.
        RouteTable.Routes.MapHubs();
    }
}

Explanation of code:

Note that we are required to include the System.Web.Routing and Microsoft.AspNet.SignalR namespaces and make the Global class inherit from the System.Web.HttpApplication class. The purpose of RouteTable.Routes.MapHubs() is to map the available hubs to a special default address: /signalr/hubs. If you want to make the hubs available at a custom address, you will have to provide that address to MapHubs as a string parameter, e.g. RouteTable.Routes.MapHubs("~/signalr2").
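
For instance, a minimal sketch of Global.asax.cs using that custom-address overload; the "~/signalr2" path is just the illustrative value from above, and the client's script reference would then need to point at the new address:

using System;
using System.Web.Routing;
using Microsoft.AspNet.SignalR;

public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        // Map the hubs to a custom address instead of the default /signalr/hubs.
        RouteTable.Routes.MapHubs("~/signalr2");
    }
}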

4. Now head over to the hub class and add the following code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using Microsoft.AspNet.SignalR.Hubs;
 
public class HelloWorld : Hub
{
    // Server-side method that clients invoke; it broadcasts the
    // client-side "hello" callback to every connected client.
    public void Hello()
    {
        Clients.All.hello();
    }
}
 
Explanation of code:

The main thing to notice here is the inclusion of the namespace Microsoft.AspNet.SignalR.Hubs and the derivation of our hub class HelloWorld from the parent class Hub. From the perspective of the client-server model, this is the class that aptly serves as a hub: it will receive all the communication from the client(s) and forward it to other client(s) or take some action on the server side. In this code, we are declaring a server-side method called "Hello" which, when executed, will invoke a method called "hello" available on all the clients.
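
As a hedged aside (not part of this sample), a hub method can also accept arguments and target different sets of clients; the class, method and argument names below are hypothetical:

using Microsoft.AspNet.SignalR.Hubs;

public class HelloWorldVariations : Hub
{
    public void Hello(string name)
    {
        // Broadcast to every connected client, passing data along.
        Clients.All.hello("Hello, " + name + "!");

        // Clients.Caller targets only the invoking client,
        // and Clients.Others everyone except the caller.
    }
}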

5. Now add an HTML page and add the following code to it.

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Hello World! with SignalR</title>
    <script src="Scripts/jquery-1.7.1.min.js"></script>
    <script src="Scripts/jquery.signalR-1.0.0-alpha1.min.js"></script>
    <script src="/signalr/hubs" type="text/javascript"></script>
    <script type="text/javascript">
        $(document).ready(function () {

            // Get the proxy for our hub, which carries both the client-side
            // and server-side method definitions.
            var hub = $.connection.helloWorld;

            // Define what the client-side "hello" method should do when it is
            // invoked by the server-side code.
            hub.client.hello = function () {
                $('#AddText').append('Hello World!<br />');
            };

            // Start the connection and, once it is established, wire up the
            // client-side event that calls the server-side method.
            $.connection.hub.start().done(function () {
                $('#SubmitButton').click(function () {
                    hub.server.hello();
                });
            });
        });
    </script>
</head>
<body>
    <div id="AddText"></div>
    <input type="button" id="SubmitButton" value="Say Hello" />
</body>
</html>

 

Explanation of code:

Here we reference all the necessary JavaScript libraries, among which are the jQuery library and the SignalR JavaScript client based on jQuery. There is one special JavaScript reference, "/signalr/hubs", which is the hub proxy script that SignalR generates dynamically at runtime. The body of the HTML document consists of a div element bearing the id "AddText", to which we will add the text "Hello World!" each time the "Say Hello" button is pressed. The jQuery code for this example is very simple. In it we get a proxy for the hub class. Note that we use camel casing, as per JavaScript coding conventions, to refer to our server-side hub class; thus "helloWorld" on the client side is "HelloWorld" on the server side. We then define the client-side "hello" method, which appends the text "Hello World!" and a line break to the div element "AddText". We then start the connection and, when done, observe the click event of the "Say Hello" button. When the click happens on the client side, we execute the server-side method "Hello" of the hub class, which in turn executes the "hello" method available on all the clients. This adds the text to the div element on all clients.
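
As an aside, if you would rather not rely on the camel-casing convention, SignalR provides a HubName attribute that pins down the name clients use. A minimal sketch, mirroring our hub class:

using Microsoft.AspNet.SignalR.Hubs;

// With HubName applied, the JavaScript proxy is $.connection.helloWorld
// regardless of how the C# class itself is cased.
[HubName("helloWorld")]
public class HelloWorld : Hub
{
    public void Hello()
    {
        Clients.All.hello();
    }
}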

Speaking diagrammatically, this is what we are aiming to do:

Client-Server Model


In the end, the code hierarchy should look like the following image, with all the code in the right place.

Code hierarchy


6. As the last step, start debugging and run the website in IIS Express. Open the website in multiple browsers and press the "Say Hello" button. It will add the text "Hello World!" in all the browsers simultaneously, in real time.

Multiple browsers


Website running in IE9 and Firefox, receiving instructions from the server in real time.

The code example presented here has been deliberately kept simple to prime the reader on the basic concepts of this new library.

In a nutshell, we achieved:

1. Real time communication between server and client(s).

2. Auto-negotiation of the transport protocol (WebSocket, forever frames, long polling etc.) between the client(s) and server.

3. Simple, unified programming interface built upon C# (server side) and jQuery (client side in ASP.NET).

Download code:

SignalR sample application download


In the next post I will demonstrate a more involved example, utilizing jQuery UI. Stay Tuned.

Introduction to SignalR


Key Takeaway:

SignalR is the newest member of the .NET ecosystem fully supported by Microsoft. It offers an abstraction over the transport layer that enables developers to build scalable applications with real-time, asynchronous communication between server and client(s), and vice versa.

Read On

In the client-server model, there are two modes of communication at play – push and pull. In the pull paradigm, it is the client that pulls information from the server. In push, it is the server that explicitly sends information to the client. The web model is based on pull technology, affectionately known as the request-response pattern. The client (browser) requests information from the server and gets the response (web page, data etc.). Once this transaction is done, the server merrily goes back into a state of amnesia. This pattern is the reason why the web is stateless.

Whenever there is a requirement for data to be explicitly pushed from the server, it poses a different set of problems than the ones the stateless nature of the web is designed to solve. You then have to keep track of connections going to various clients, all the while maintaining scalability and performance.

To achieve push from the server, there are various kinds of hacks that are put in place, all based on the pull paradigm. The various sleights range from opening a never-ending connection to the server to polling the server again and again. There is even a standard term for such kinds of client-server interactions – COMET.

COMET operations fall under two categories – polling and streaming. Streaming is when the client opens up a connection to the server and keeps it open. Such a connection is called a persistent connection, and a characteristic of such a connection is the "Connection: keep-alive" header at the transport level. Persistent connections are predominantly used in web applications targeting real-time data which needs to be served to a variety of versions of various browsers. One fine example is Google Finance. Notice that it uses a persistent connection to bring in data continuously about the live condition of the financial market.

Persistent Connection 1

Google Finance uses persistent connection to update the webpage continuously.

Polling is the technique in which the client keeps sending requests to the server, demanding any new data that the server might have. A good example of polling is the Dropbox client application; a toy sketch of the idea follows the figure below.

Polling 1


Dropbox client application in Windows 7 does polling.
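
To make the polling idea concrete, here is a bare-bones C# sketch; the endpoint URL and the five-second interval are made up for illustration, and a real client like DropBox's does much more (hashing, diffing and uploading only the changed blocks):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class PollingClient
{
    static void Main()
    {
        PollAsync().Wait();
    }

    static async Task PollAsync()
    {
        using (var http = new HttpClient())
        {
            while (true)
            {
                // Each iteration is an ordinary request-response cycle;
                // the "push" is simulated by asking over and over again.
                string data = await http.GetStringAsync("http://example.com/updates");
                Console.WriteLine(data);

                // Wait a bit before the next poll to avoid hammering the server.
                await Task.Delay(TimeSpan.FromSeconds(5));
            }
        }
    }
}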

Yet another trick involves using an iframe, which is fundamentally an HTML document embedded within a parent HTML document. This technique is primarily used to show ads dynamically within a web page, periodically pulling a new one into the iframe.

iframe 1

Yahoo Mail’s new interface uses iframe to show advertisements.

The newest technology on the block is the WebSocket protocol. WebSocket is part of the HTML5 draft and is a truly duplex, persistent form of connection to server-side processes. But sadly, not all browsers, and more importantly web servers, support the WebSocket protocol as of today. So if you are developing a public-facing website, you cannot truly rely on WebSocket yet.

So, all in all, right now we have a medley of technical tricks, plus an upcoming technology that is not uniformly supported by all client browsers and web servers, to solve the problem of serving real-time data from server(s) to various client(s). Adding to the complexity is the fact that there are various kinds of clients available, ranging from web browsers to sensors (that can use the .NET Micro Framework) to native applications in the iOS and Android ecosystems to desktop applications that require real-time data pushed from a central server.

Meowpheus


This is where SignalR comes into play and provides a unified approach to handle the requirement of asynchronous real time duplex communication between client(s) and server.

The way SignalR works is that it auto-negotiates the transport between the client and the server based on the capabilities of the pair. So if both support WebSocket, then that is used. If not WebSocket, then SignalR falls back to server-sent events; if not that, then forever frames; and so on. The developer is not required to worry about transport detection, usage and eventual graceful degradation: SignalR handles this automatically, all the while providing a uniform API to program against. SignalR is being actively developed by Damian Edwards's team at Microsoft, and they have already released the first version in the fall update (2012) of ASP.NET. SignalR scales well and already supports SQL Server, Service Bus and key-value stores like Redis. Client-side APIs are available for .NET (4.0 and 4.5), HTML in the form of a jQuery client, WPF, Silverlight 5 and Windows Phone 8.
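
As an illustration of that uniform API, here is a hedged sketch of a .NET console client talking to a hub like the HelloWorld one from my sample application; it assumes the SignalR 1.x .NET client package (Microsoft.AspNet.SignalR.Client), and the URL is a placeholder:

using System;
using Microsoft.AspNet.SignalR.Client.Hubs;

class Program
{
    static void Main()
    {
        // Transport negotiation (WebSocket, server-sent events, forever
        // frames, long polling) happens automatically inside Start().
        var connection = new HubConnection("http://localhost:8080/");
        var hub = connection.CreateHubProxy("HelloWorld");

        // Subscribe to the client-side "hello" callback.
        hub.On("hello", () => Console.WriteLine("Hello World!"));

        connection.Start().Wait();

        // Invoke the server-side Hello method, just as the jQuery client does.
        hub.Invoke("Hello").Wait();

        Console.ReadLine();
    }
}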

In the next post I will make a simple application using SignalR. In the meantime please learn more about this awesome technology from the following videos:

1. Damian Edwards and David Fowler presenting the basics of SignalR (Sorry, the video is too big to fit in here.)

2. Keeping It Realtime Conference - SignalR - Realtime on ASP.NET (Scott Hanselman & Paul Batum)


Please head over to the following links to get more information:

1. SignalR.net

2. www.asp.net/signalr

3. Source code of SignalR at GitHub

Coming up in future

5. February 2013 14:24 by Parakh in General

My last post, although not very utilitarian, was an exciting one for me, as it allowed me to share my thinking. This post continues the theme of my last post. I wanted to share some of the choicest videos that help us see a glimpse of what the future holds for us. See how research conducted at avant-garde institutes might completely change our lifestyle and our thinking, and give new insights into solving problems ranging from transportation to mental disorders.

In the first video, Eric Schmidt (Chairman, Google) discusses how ideas that had been in incubation for a long time are now coming to fruition. This especially ties into my previous post on how the proliferation of cloud computing and bandwidth are intertwined.

Eric Schmidt of Google talks at Princeton about the future of technology

 

I am a big fan of the innovation coming out of Google. One example is Project Glass. It is worth noting how they have captured the facets of life where such a device might be of help, although I doubt people will just put it on the moment they wake up.

Project Glass: One day...

 

In a similar tone, here is Microsoft's HoloDesk.

Microsoft's HoloDesk - Direct 3D Interactions with a Situated See-Through Display

 

Sebastian Thrun's Stanley was the winner of the DARPA Grand Challenge in 2005. That challenge has inspired both the tech and auto industries to create driverless cars. Listen to him give a beautiful introduction to the concept.

Sebastian Thrun: Google's driverless car

 

Simplifying complex concepts is an art unto itself, and one that takes a lot of deliberate practice. Listening to Juan Enriquez discuss the direction that biotech will take, and its implications for life, is mesmerizing.

Juan Enriquez: Using biology to rethink the energy challenge

 

Juan Enriquez: Will our kids be a different species?

 

Juan Enriquez: The next species of human

 

While advancement in technology mostly yields very tangible results in the form of automation, the time spent dealing with various problems can also yield simple solutions and completely new insights. This video shows how VS Ramachandran found new insights into treating the phenomenon of the phantom limb.

VS Ramachandran: 3 clues to understanding your brain

These are just some of the videos that help us see what the future holds for us and understand the power of science and engineering, and why STEM courses are important for the future growth of a nation. I just want to sign off this post with this video of Sir Ken Robinson, emphasizing the changes that need to be brought to the education system to help catalyze and realize creative thinking.

Sir Ken Robinson: Do schools kill creativity?

 

I hope this post made you as excited as I was when I got to revisit these videos.

Cloud computing and what it means for internet pipes

27. January 2013 13:47 by Parakh in Cloud Computing, General

Key takeaway:

Technology companies, in order to promote their cloud platforms and integrate them into our daily lives, might have to take ownership of the infrastructure supporting the internet, and in the process might also leverage this endeavor to become service providers.

Read on:

In the not so distant future, we will be travelling in self-driving cars that will adjust the cabin climate according to the ambient weather and the habits of the user, play music from your home server or read news and business reports using a digital subscription, and, after dropping you off at the office, will go and pick up your groceries based on the list sent to your car by your refrigerator. Once you reach home, you will be greeted with an ambience matched to how your schedule went during the day. Your home will automatically remind you about your favorite TV serials scheduled for the day and record them for later playback, in case you are unable to watch them when scheduled.

 

Microsoft’s vision of productivity in future

 

 

A Day Made of Glass... Made possible by Corning.

 

You see, a lot of what I mentioned is already happening around us in one form or the other; it is just that it is not happening in one cohesive form, one mass that can act seamlessly. But that will change in the future. A lot of what I mentioned in the opening paragraph requires the following things:

1. Data about you – lifestyle, requirements, frequency and kind of purchases etc.

2. Appliances that operate on standard protocols

3. Bandwidth

The big data and analytics movement aims to solve the first requirement, and if you look at the concept of shopping recommendations, to a certain extent it does solve it; with time and data it will get better. The second requirement, appliances that operate on standard protocols, is also being worked on. The third requirement, bandwidth, is the focus of this post and is the most interesting to me at this moment. Let me present my perspective on where things are and where they are heading.

It has been a long time since the term "cloud computing" was first coined, and it materialized in the form of Amazon Web Services in 2002. Cloud computing has since evolved into various kinds of services, the gist of all being that the consumer of the service is not required to engage in managing IT infrastructure. The service can be in the form of an operating system as a service (Windows Azure), which can then become a foundation to run programs written in the various programming language(s) supported, or it can be a simple Customer Relationship Management (CRM) system (Salesforce). All these are offered as services that are managed by their parent companies, freeing the service consumer to focus on what they do best, and not invest capital and human resources into managing infrastructure.

Now if we take a step back and look at how the principles of economics governed the spread of general computing, we will find that general computing first targeted the enterprise space, to make businesses efficient and save cost, and then came down to the general masses. Similarly, cloud computing is currently targeting the enterprise space heavily, enticing enterprises with the convenience and upfront capital cost savings the concept brings with it, but it has also started to crawl into the consumer space. Look at DropBox, Google Drive and Microsoft SkyDrive; they are not enterprise storage solutions, but rather consumer-facing cloud storage solutions. In the entertainment sector, look at iTunes, Amazon AutoRip, Google Play and Netflix; they are not enterprise solutions at all, but rather a flavor of cloud services geared towards consumers.

An interesting side effect of cloud computing is that it can be extended to individual consumers and be used to gather data about their purchasing habits. Applications and services like iTunes, Google Play, Amazon and Netflix take into account your choices and, based on your past preferences, suggest potential songs, movies, books, goods, services etc.

Since these services rely on the internet as the medium of conveyance, the bandwidth available between the offering and the consuming entities can become an issue. This becomes a bigger concern if the service provider hosts a public-facing service that is heavy on bandwidth, such as video; Netflix, for example.

In such situations, the growth of the service provider depends upon meeting the demands of the service consumer and ensuring that there are reliable and redundant pipes available. Given this, it would bode well if they had a say in the upkeep of these internet conduits. A more desirable situation from the service provider's perspective would be to own this essential piece of infrastructure, if they want to grow their cloud platform.

As far as I can see and evaluate, this is already happening, albeit at a slower pace. Companies like Google, I believe, already have a strategy and have started acting on it in the primary markets. We can see that in the form of high-speed fiber connectivity in Kansas City, Missouri. Recently, Google started offering free Wi-Fi in limited areas in New York and in the City of Mountain View. Fast connectivity means more viewing of high-definition video, more usage of cloud storage and more video conferencing, leading to an alternate source of income for cloud computing provider(s) and ultimately to an end-to-end solution, i.e. collection of data about habits, leading to predictive analytics, leading to an automated lifestyle. So it starts with the ownership of internet connectivity in city areas, might eventually end up with transcontinental internet pipes, and in the process turns the technology company into a service provider.

It gels well with the philosophy and selling points of cloud computing: redundant data backups at geographically dispersed locations, and content distribution networks which serve content from the data center located nearest to the consumer. One more pointer in this direction is Google's purchase of Motorola Mobility, giving it access to a cache of intellectual property in the form of telecom patents.

One of the primary reasons for this trend is that, if you look at history, it will be apparent that the telecom companies have not done much to push the limits of bandwidth on a proactive basis. They just offered consumers whatever lowest common denominator they could come up with that proved profitable to them. Before the advent of cloud computing, the tech companies had no stake in this either. But all that is changing. More mobile devices, the proliferation of video content, video communication, in-app purchases, cloud storage etc. require more and more bandwidth. That's too much of a responsibility to be left with Ma Bell, especially when the tech companies are not getting a cut of the pie. Self-driving cars, home automation, intelligent thermostats, refrigerators etc., all eventually leading to an internet of things, will require a whole lot of bandwidth and redundancy. Retina screens, HD TVs and 4K TVs need loads of bandwidth in order to render the depth and richness that they have been designed for. Thus, the future depends on how efficiently we carry data to and fro between the serving and consuming points, and that requires discarding copper conduits and embracing optical and high-speed wireless technologies with new standard protocols that work efficiently, which is precisely what pioneering companies like Google are quietly working on.

See more:

1. The future according to Google's Larry Page

2. The Internet of Things

3. Eight business technology trends to watch

Using DropBox to back up your Source Control repository

15. January 2013 14:28 by Parakh in Cloud Storage, DropBox, Source Control, SVN
Let cloud storage take care of automatically backing up your source control repository.

In organizations of every size, code is generally managed with the help of source control. It is great for keeping a versioned history and doing branched modifications. The code repository is generally kept in a regularly backed-up environment, with someone caring for it with all their wit and skill. But the same level of service and peace of mind is not available to those same developers working on their weekend projects on their personal computers. Granted, they can have source control, but having a service regularly back up the repository can be hard; sometimes because of a lack of time and/or storage media, and at other times because backups are fragmented across different places and we have no idea of the whereabouts of the latest one. Cloud storage removes this obstacle and allows us to take automatic backups – in fact, instantaneous backups – of the repository whenever anything in it is revised or added, without the hassle of handling any storage media. This has been made possible fundamentally by the fact that most cloud storage services provide a client application that watches a certain folder for any changed and/or new bits of information.
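
To give a feel for that folder-watching mechanism, here is a toy C# sketch built on .NET's FileSystemWatcher; the folder path is just an example, and a real sync client additionally hashes, diffs and uploads the changed blocks:

using System;
using System.IO;

class FolderWatcher
{
    static void Main()
    {
        var watcher = new FileSystemWatcher(@"C:\DropBox")
        {
            IncludeSubdirectories = true,
            EnableRaisingEvents = true
        };

        // In a real sync client, these events would trigger an upload
        // of the changed bits to the cloud.
        watcher.Created += (s, e) => Console.WriteLine("Created: " + e.FullPath);
        watcher.Changed += (s, e) => Console.WriteLine("Changed: " + e.FullPath);

        Console.WriteLine("Watching... press Enter to quit.");
        Console.ReadLine();
    }
}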

Here I will cover how to make DropBox's Windows client application take care of a TortoiseSVN repository.


Figure 1 Conceptual diagram

Steps involved:

1. Install DropBox's client application appropriate for your operating system, with advanced parameters, specifying the name and location of the DropBox folder and choosing selective sync. If you have multiple accounts, make sure you provide the credentials of the account where you want the copy of the repository to be stored.


Figure 2 Advanced installation

 


Figure 3 Location of backed up folder

 


Figure 4 Using selective sync to better use your storage space

 


Figure 5 Installed directory

2. Install TortoiseSVN with default parameters.

3. Go to the folder that is being watched by the DropBox client service for any updates or additions and make a new folder in which you want the repository structure housed.


Figure 6 Create repository in DropBox folder

4. Right-click on the folder and choose to create a new repository. TortoiseSVN will create the repository structure (there will be folders like conf, db etc.). You will be storing your code in this repository.


Figure 7 Create repository

 


Figure 8 Resultant repository structure

5. Once you create the new repository, the DropBox application will immediately start synchronizing the backup folder with the one on its servers, replicating the entire repository structure.

6. Now navigate to the location where you want to have your working copy of the code. You have to create a working copy into which you will later be checking in your code and any new assets that you want added to the repository.


Figure 9 Check out from your repository

7. Finally, you can add folders to contain your projects. See the final result. (Create new folder –> Add to repository –> Commit to repository)


Figure 10 Example of directory structure of working copy

8. Once you make any updates to your repository from your working copy, only the bits that change or get added will be uploaded to your DropBox profile.

It is important to understand that you have to back up the repository structure itself (i.e. the weird structure containing folders such as conf, db, hooks etc.) and not the working copy, since you can always get a working copy from the repository.

Many thanks to my brother Priyanshu Agrawal for letting me use his computer for software installation and resultant screenshots.
