Jun 26
Moving to ASP.NET Identity from ASP.NET Membership

So I'm in the process of generally updating and spring-cleaning a web application I created in MVC 4. I've upgraded this to MVC 5 based on the instructions in this article. It's a pretty straightforward process and results in a true MVC 5 web application. However, there will be a lot of older technology hanging around in your upgraded project. For example, the upgraded project will still use the ASP.NET Membership system for user and roles management. If you create an MVC 5 project from scratch, by contrast, you'll be using the new Identity system, which is built on the OWIN/Katana project.

In this article I'm going to explain how to move from Membership to Identity in such a situation. This will fill in the gap left by this MSDN article, which includes the phrase "We hope to soon provide guidance on migrating your existing apps that use ASP.NET Membership or Simple Membership to the new ASP.NET Identity system". Beat you to it!

There are a lot of advantages to using Identity instead of Membership. These include:

  • It's easy to store users and roles in the same database as your other data (e.g. products). Before, I usually maintained membership info in a separate database.
  • It's easy to add properties to a user account. What you do is define a model class that inherits from IdentityUser. This gives you all the standard properties. You can add any extra property you like, such as birthday, eye colour, favourite member of Blake's 7, and so on.
  • Identity is claims-aware.

Here's an overview of the steps:

  1. Install the required NuGet packages
  2. Create an OWIN Startup Class
  3. Configure OWIN Cookie Authentication
  4. Configure your Entity Framework DB Context to store Identity Information
  5. Rewrite Membership code as Identity Code
  6. Configure the Anti-Forgery Token
  7. Cleanup

Let's launch into those steps:

1. Install the Required NuGet Packages

As well as the Identity packages, you'll need to add the OWIN host package. Here are the commands to run in the package manager:

Install-Package Microsoft.AspNet.Identity.EntityFramework

This package will upgrade Entity Framework to version 6.1.0 and install the Microsoft.AspNet.Identity.Core package.

Install-Package Microsoft.AspNet.Identity.Owin

This one installs a whole bunch of OWIN packages. 

Install-Package Microsoft.Owin.Host.SystemWeb

This command installs a host that enables OWIN applications to run within the IIS ASP.NET pipeline.

2. Create the OWIN Startup Class

To configure OWIN you must add a new class to your application. This doesn't have to derive from anything else or implement any interface; however, you do have to mark it as the OWIN startup class. Do this by adding the following line of code, either to the class file itself, or to the AssemblyInfo.cs file where other assembly properties are configured:

[assembly: OwinStartup(typeof(mynamespace.mystartupclass))]

You can put this startup class anywhere in the project but the App_Start folder is the logical place.
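
Putting that together, a minimal skeleton looks something like this (the namespace and class names are placeholders for your own):

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(MyApp.Startup))]

namespace MyApp
{
   public class Startup
   {
      public void Configuration(IAppBuilder app)
      {
         //OWIN configuration goes here - see step 3 below
      }
   }
}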

Note: This doesn't seem to work with Cassini. When I tried this I got "This operation requires IIS integrated pipeline mode". Switching to IIS Express fixed it.

3.  Configure OWIN Cookie Authentication

We're going to use local cookie authentication, because that's the equivalent of forms authentication in ASP.NET membership. We need to tell OWIN what kind of authentication to use and also where the login page is.

You will already have a controller that handles login, logout and other membership functions. This controller doesn't need replacing - most logic in it will be the same but you'll need to change some lines of code in it (we'll cover those below). Right now we need to point OWIN to the login action on that controller.

We do these two things by adding a Configuration method to the OWIN startup class:

public void Configuration(IAppBuilder app)
{
   app.UseCookieAuthentication(
      new CookieAuthenticationOptions
      {
         AuthenticationType =
            DefaultAuthenticationTypes.ApplicationCookie,
         LoginPath = new PathString("/Accounts/Login")
      });
}

Once the above is completed, restricted controller actions still go to the Forms Authentication login page. To stop this, you must remove the <authentication> tag from web.config. Then users will be forwarded to the PathString you configure for OWIN authentication.

4. Configure Your Entity Framework DB Context to Store Identity Information

If you want to store your user information in the same database as your other data, you must now make some changes to the Entity Framework code. In fact, it's pretty simple, you just need to make your EF DB Context class (often called the repository) inherit from IdentityDbContext instead of the usual DbContext.

public class MyRepository :
   IdentityDbContext<IdentityUser>, IMyRepository
{
   //data access code here.
}

Because I just want to create standard users, I've passed the IdentityUser class in the above code. If you want to add extra properties to a user account, such as preferences or profile information, define an extra model class that inherits IdentityUser:

public class MyUser : IdentityUser
{
   public string FavouriteColour { get; set; }
}

Then change the repository class to this:

public class MyRepository :
   IdentityDbContext<MyUser>, IMyRepository
{
   //data access code here.
}

This means that the database schema has changed, so you need to propagate those changes to the actual database. In order to do that without destroying existing data, you should use EF code migrations. Assuming you've enabled those already, run the following commands to apply the schema change:

Add-Migration "IdentityInfo"
Update-Database

5.  Rewrite Membership Code As Identity Code

All the building blocks are now in place so our next step is to rewrite the code that logs users on, logs them off, registers their accounts and so on. These operations are a little more involved with Identity, because it's claims aware, but not much harder to understand. You can see some examples of these operations on ASP.NET. I'll show just one example: logging in. Here's a simplified Controller Action that uses Membership:

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Login(Login login)
{
   if (ModelState.IsValid)
   {
      //We have credentials. Let's check them.
      if (Membership.ValidateUser(login.Username, login.Password))
      {
         //The credentials are correct.
         //Let's log the user in
         FormsAuthentication.SetAuthCookie(login.Username, false);
      }
   }
   return RedirectToAction("Index", "Home");
}

And here's the Identity version:

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Login(Login login)
{
   if (ModelState.IsValid)
   {
      //We have credentials. Let's check them.
      UserStore<MyUser> userStore =
         new UserStore<MyUser>(new MyRepository());
      UserManager<MyUser> userManager =
         new UserManager<MyUser>(userStore);
      MyUser user =
         userManager.Find(login.Username, login.Password);

      if (user != null)
      {
         //The credentials are correct.
         //Create some claims for the user
         List<Claim> claims = new List<Claim>();
         claims.Add(new Claim(ClaimTypes.NameIdentifier, login.Username));
         ClaimsIdentity userId = new ClaimsIdentity(
            claims,
            DefaultAuthenticationTypes.ApplicationCookie);

         //Let's log the user in
         var owinContext = Request.GetOwinContext();
         var authManager = owinContext.Authentication;
         authManager.SignIn(userId);
      }
   }
   return RedirectToAction("Index", "Home");
}
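
Although I said I'd show just one example, logging off is worth a quick sketch too, because it's so short (this assumes the same controller as above):

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Logout()
{
   //Sign out of the application cookie
   var authManager = Request.GetOwinContext().Authentication;
   authManager.SignOut(DefaultAuthenticationTypes.ApplicationCookie);
   return RedirectToAction("Index", "Home");
}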

6. Sort Out the Anti-Forgery Token

You should be using anti-forgery tokens to prevent Cross-Site Request Forgery (CSRF) attacks whenever you accept user input. If you don't already know about this, you should, so start reading here. Unfortunately, moving to Identity Cookie Authentication confuses the anti-forgery token filter, so you need to do a little configuration to reassure it.

All this requires is the following line of code added to the Global.asax Application_Start() event handler:

AntiForgeryConfig.UniqueClaimTypeIdentifier =
   ClaimTypes.NameIdentifier;

If you forget to do this step, you'll get an error that complains about "A claim of type: X was not present on the provided claim identity". You can read more about this issue on Brock Allen's Blog.
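
For context, here's roughly where that line sits (a sketch - your Application_Start will contain other registration calls from your project template):

protected void Application_Start()
{
   AreaRegistration.RegisterAllAreas();
   RouteConfig.RegisterRoutes(RouteTable.Routes);

   //Tell the anti-forgery system which claim identifies a user
   AntiForgeryConfig.UniqueClaimTypeIdentifier =
      ClaimTypes.NameIdentifier;
}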

7. Clean Up

Nearly done. The application should now run and you should be able to log on, log off, and complete all the other actions that you have recoded for Identity. Now is a good time to check that everything works if you haven't been checking as you were going along. Run all your unit and end-to-end tests to ensure you haven't broken anything as well.

Assuming that things check out OK, we should do some cleaning up to remove Membership and associated packages and classes from our application. Firstly, remove all using statements for the following namespaces:

  • System.Web.Security
  • System.Web.Providers

Secondly, remove the Universal Providers NuGet packages. Here's how to do that in the package manager console:

Uninstall-Package Microsoft.AspNet.Providers

Also, you will probably have some kind of accounts or membership repository class which you can now delete from your models. This is because the accounts are now with the application data in a single repository that stores everything. Similarly, you should find that you can delete a redundant connection string from web.config.

Summary

So that's an annoyingly long procedure, but it's quicker than starting your application from scratch, and you've gained a much improved way to store users and profile information in the same database as the rest of your data. You've also learnt a bit about Identity, hopefully.

Next, you might want to consider using Facebook, Microsoft, or other external authentication providers so that users don't have to create a new account for access to your site. Read about that in this ASP.NET article.

Hope this helps! 

 

Jun 12
Using Unity with Web API When You're Already Using it With MVC

So here's a situation I think many of you may find yourself in:

  • You've developed an ASP.NET MVC 4 web site.
  • You've used Entity Framework for data access.
  • You've done dependency injection by using the Unity framework from the Microsoft Patterns and Practices team. This means you can easily write unit tests for your code by injecting mock objects instead of using real databases and so on.
  • Now you want to add an API that serves JSON and/or XML data for use by mobile apps and other clients.

The ASP.NET Web API is your best bet to build this API. You'll want to stick with the models that you built for the MVC version and stick with Unity for dependency injection. If you can do this, then the task of building the API reduces to building the Web API controllers and configuring the Web API Media Formatters to ensure that the API generates exactly the right JSON or XML.

This isn't a particularly difficult task and there are a lot of articles available to help any programmer who can put 2 and 2 together. However, I think a lot of people will be in precisely this situation, so I thought it might help if I documented the necessary steps. It will also be very similar for those using another dependency injection (or Inversion of Control) framework, such as Ninject.

Step 1:  Install the Unity.WebAPI NuGet Package

So this is pretty straightforward. Use the NuGet Package Manager to locate the Unity.WebAPI package or use the following command in the package manager console:

install-package Unity.WebAPI

The main thing to watch out for here is that the package will attempt to install the UnityMvcActivator.cs file in the App_Start folder. In older versions of the package this was called bootstrapper.cs. This file is already present because the Unity.MVC package has installed it. When asked if you want to overwrite it, it's important to respond "No", so that you don't lose your MVC Unity configuration.

Step 2: Register Unity as the IoC Container

Web API has support for dependency injection, AKA inversion of control (IoC) - you can tell because Web API's HttpConfiguration object includes a DependencyResolver property. But note that Web API doesn't have its own dependency resolver (AKA IoC container). Instead you use this property to register Unity, Ninject or whatever you want as the dependency resolver to use. Here's how to do that:

var container = UnityConfig.GetConfiguredContainer();
GlobalConfiguration.Configuration.DependencyResolver =
   new Unity.WebApi.UnityDependencyResolver(container);

In my projects I put this code in the UnityMvcActivator.cs file in the Start method. That is where the equivalent task for MVC is done so it's logical to me to put this code in the same place.
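
For completeness, the type registrations themselves live in the UnityConfig class's RegisterTypes method (if you're using the Unity bootstrapper that generated UnityConfig.cs). Here's a sketch - IProductService and the concrete classes are made-up names standing in for your own:

using Microsoft.Practices.Unity;

public static void RegisterTypes(IUnityContainer container)
{
   //Map interfaces to concrete classes once;
   //both MVC and Web API will resolve against these mappings
   container.RegisterType<IProductService, ProductService>();
   container.RegisterType<IProductRepository, ProductRepository>();
}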

Step 3: Start Using IoC in Web API Controllers

At this stage, you can start creating your Web API controllers and use Unity to inject service or repository objects. Here's an abbreviated example:

public class ProductController : ApiController
{
   private IProductService service;

   //Use Unity to resolve the right service class
   public ProductController(IProductService serv)
   {
      service = serv;
   }

   // GET api/<controller>
   public IEnumerable<Product> Get()
   {
      IEnumerable<Product> products = service.AllProducts;
      return products;
   }
}

Great. Now you think you're done and you run the project. When you call http://localhost:port/api/products from Fiddler (or whatever client you prefer) it might work. Usually, however, you'll get an error returned and, when you look into it in Fiddler, you find an inner exception that talks about an infinite loop.

Step 4: Fixing the Infinite Loop Problem 

The infinite loop problem is actually pretty easy to fix and originates with the Web API media formatters. These are the things that take an object or collection (like the products collection in the above example) and turn that into JSON, XML or whatever serialized data you want to send to the client. I'm a big fan of the media formatters because they make it very, very easy to build your Web API without writing a lot of awkward text formatting code. But it's not surprising that they need some configuration to control the JSON or XML they generate.

The infinite loop arises from foreign key constraints in your database. For this example, imagine there is a one-to-many relationship between Manufacturers and Products, i.e. each manufacturer in your database can make multiple products. In your model classes, there'll be a reference from Manufacturer to Products and a reference from Products to Manufacturer. If you're using Entity Framework Code First, these will be navigation properties.

You just need to configure the media formatters to spot these infinite loops and stop at the first loop. I do this in the WebAPIConfig.cs file, in the Register method, after Web API routes have been set up. Here's the code for the JSON formatter:

JsonMediaTypeFormatter jsonFormatter
   = config.Formatters.JsonFormatter;
jsonFormatter.SerializerSettings.PreserveReferencesHandling
   = Newtonsoft.Json.PreserveReferencesHandling.All;
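
As an aside, if you'd rather the serializer simply skipped circular references (instead of preserving them with $id/$ref markers, as the setting above does), Json.NET offers an alternative:

jsonFormatter.SerializerSettings.ReferenceLoopHandling
   = Newtonsoft.Json.ReferenceLoopHandling.Ignore;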

Once that's done, you can run your project, call the API and get JSON results that you can parse in your client. In other words, you're up and running.

 

Mar 21
Creating a Tag Cloud in ASP.NET MVC

Tag Clouds are now very commonly used on blogs, wikis, and all over the place. The essence is that you add multiple tags to each article, post, or item. The Tag Cloud lists all the tags in the entire content. The size of each tag in the cloud shows how often it is used - frequent tags are displayed in a large font. This immediately directs the visitor to important content. Here's an example which, quite by chance, is the one I built recently.

So here's a recipe for a tag cloud in an MVC site. It's not rocket science, people, but it might help a few humble web devs out there. Hope so.

Underlying Database and Architecture

So, in this example, the content consists of photos but it could be any kind of content. Each photo has a property called Keywords. This is just a string property. We're going to save lots of tags with each photo by separating them with commas (genius!).

I'm using Entity Framework in Code First mode, which means I write code describing the database objects and EF creates them in the database when I run the project. Here's the code that creates the photo class:

public class Photo
{
    //PhotoID
    //Primary key
    [DisplayName("Photo ID")]
    public int PhotoID { get; set; }

    //Title
    [Required]
    [DisplayName("Photo Title")]
    public string Title { get; set; }

    //ImageFile
    [DisplayName("Image File")]
    public string ImageFile { get; set; }
 

    //Keywords
    public string Keywords { get; set; }

} 

Simple so far.

I'm using the service/repository pattern, which means that I split my MVC model into two layers: all the data access code goes in the repository class, all the business logic goes in the service layer. For a Tag Cloud, I'm not going to do anything in the repository code: in fact the repository just needs to return a queryable collection of all the photos in the database. Here's the code from the repository that makes sure that's true:

public class PhotoRepository : DbContext, IPhotoRepository
{

   //This line tells EF to create a database table to store photos
   public DbSet<Photo> Photos { get; set; } 
 

   //Queryable collection of all photos (implements the repository interface)
   IQueryable<Photo> IPhotoRepository.Photos
   {
      get { return Photos; }
   }

}

That's it for the repository. If you're familiar with writing MVC models that use EF, this should be very familiar to the point of tedium.

Service Layer: Getting All The Keywords

So, in the service layer we want to build a method that returns a list of all the keywords in the database. That's a little more challenging than it sounds, because the keywords field contains many keywords for each photo and the only thing that separates them is a comma. We want to make sure that the returned list contains the keyword "Banana" only once, even if "Banana" is used fifteen times in the content.

However, the frequency of the word "Banana" is important, because it tells the Tag Cloud how large to display the keyword. So we need to return an integer with each tag.

One last thing: it'll help if we return keywords in alphabetical order.

We can do all of these things by returning a SortedDictionary object:

public SortedDictionary<string, int> FindAllKeywords()
{
    //Set up variables to hold all keywords
    SortedDictionary<string, int> allKeywords =
       new SortedDictionary<string, int>();

    //First get all distinct keyword field contents
    //in the photos. Each one is a comma-separated
    //list of keywords.
    IQueryable<string> distinctPhotoContents =
       (from p in repository.Photos
        where p.Keywords != null
        select p.Keywords).Distinct();

    //Get the keywords in the photos
    allKeywords =
       listKeywordsInContent(distinctPhotoContents);

    return allKeywords;
}

In the above code, we start by setting up the empty allKeywords dictionary, which we will fill and return. Then we use a LINQ query to get a list of the distinct keyword field values in the database. The Distinct clause just stops us from wasting time when two photos have the same list of keywords. Remember that each keywords value is actually a comma-separated list of multiple keywords, so we can't just return that list. Instead we'll pass it to the listKeywordsInContent method, which will parse each comma-separated list, and we return the results from that method. So this FindAllKeywords method can be used by a controller to get a list of all the keywords with their frequencies of use.

Service Layer: Parsing the Comma-Separated Lists

Let's focus on that listKeywordsInContent method: We send that method the queryable list of keyword values from the database. We know that each entry in the list is actually a comma-separated string of multiple keywords. So this method should do the following:

  1. Loop through the input list.
  2. For each entry, separate each keyword by using commas as delimiters
  3. Trim any white space and convert the keyword to a consistent case.
  4. For each keyword, check whether it has been added to the results list.
  5. If it hasn't been added already, add it.
  6. If it has been added already, increment the frequency integer for it.
  7. Return the result.

Here's the code:

private SortedDictionary<string, int> listKeywordsInContent(IQueryable<string> content)
{
   //Set up variables to hold the results and the separated keywords
   SortedDictionary<string, int> existingKeywords =
      new SortedDictionary<string, int>();
   string[] separatedKeywords;
   string fixedKeyword;

   //Loop through the comma-separated lists of keywords
   foreach (string list in content)
   {
      //Separate the individual keywords using comma separators
      separatedKeywords = list.Split(new Char[] { ',' });

      //Loop through the separated keywords
      foreach (string keyword in separatedKeywords)
      {
         //Trim white space and convert to Title case
         fixedKeyword = CultureInfo.CurrentCulture.TextInfo.ToTitleCase(keyword.Trim());

         //Check that the keyword has not already been added
         if (!existingKeywords.ContainsKey(fixedKeyword))
         {
            //Add the keyword and set the value to 1
            existingKeywords.Add(fixedKeyword, 1);
         }
         else
         {
            //The keyword is already added. Increment the value
            existingKeywords[fixedKeyword]++;
         }
      }
   }
   return existingKeywords;
}

So this is the magic that turns all those comma-separated lists into a single sorted dictionary of keywords, each with an integer that describes how often they are used. This is just what we need to render our tag cloud.

Controller: The Hard Part, Not

We've done all the hard work now, and we can write a very simple controller action that gets that dictionary of keywords and passes it to a View to use as a model:

public ViewResult TagCloud()
{
    //Get the keywords
    SortedDictionary<string, int> allKeywords
       = service.FindAllKeywords();

    return View("TagCloud", allKeywords);
}

I'm always reassured when my controller actions are as simple as this, because I like them to be concerned only with getting the right model and passing it to the right view. You sometimes need more, but I am of the school that says business logic doesn't belong here.

View: Rendering Sized Links

Remember that the model we've just passed to the view is a sorted dictionary. In that dictionary each entry has two properties: the key, which is the keyword itself, and the value, which is the frequency. In the view, we're going to loop through that dictionary and use the key to render the keyword and the value to size the keyword. Like this:

@model SortedDictionary<string, int>

<div id="tag-cloud">
@foreach (var item in Model)
{
   @* Render a span and set the size by the frequency
      with which the keyword is used *@
   <span style="font-size: @(item.Value * 0.8)em;">
      @* Render a link to the keyword *@
      @Html.ActionLink(item.Key, "KeywordGallery", "Photo", new { keyword = item.Key }, new { })
   </span>
}
</div>

Just one last thing to point out here: for each keyword in the list, we need to render a link. Remember that in this example, the content is a database of photos. So when a visitor clicks on the keyword "Banana" (and there are often a lot of photos of bananas in any photographer's website) he'll expect to see a gallery of all the photos with the keyword "Banana". That's why the Action Link in the above code renders a link to the KeywordGallery action in the Photo controller.

So, there you go. A tag cloud. Hope it helps someone! 

Mar 20
Learning About Test Driven Development in ASP.NET MVC

So Test Driven Development (TDD) is by no means new and it has been very well established on planet ASP.NET ever since MVC arrived. I've been aware of it for a long time but only recently got properly to grips with its foibles etc. as I was writing a chapter of a course about it. Also, I finally started to use it properly on a dev project. I noticed that many of the explanations out there are somewhat complicated and arcane, whilst others are much better and give you a clear understanding of what isn't actually all that complicated in reality. The confusion arises because there are several interlocking concepts, including unit testing, decoupled code, dependency injection, and mocking. Also, the terminology is not entirely consistent.

What I want to do here is point you all at the better explanations of the concepts so you can skip all that pain. Also I want to give a high-level overview of the concepts and how they fit together, which should make learning the whole thing simpler. This explanation uses ASP.NET MVC applications as examples, but the broader concepts apply elsewhere as well.

TDD

TDD itself is pretty straight-forward:

  • You want to create a new bit of software. You have some use-cases, requirements, and so on.
  • You choose a small piece of functionality to build.
  • You write some tests that will prove it does what you want it to do. Notice that you do this before you write any functional code.
  • These tests fail, because you haven't written any functional code yet.
  • You write functional code so that the tests pass.
  • Repeat.

I know - I've simplified it a lot. TDD is actually a project development model like Agile - the two are closely related. To learn this in greater depth, have a look at this Introduction to Test Driven Development.

Of course, you can't do TDD until you know how to do unit tests...

Unit Testing

So, you've planned to TDD, or maybe you're just doing Agile development or using some other model, but you understand how unit tests can significantly increase the reliability of your code. In my experience, the best thing about unit tests is how they help to spot problems that arise later in a project. For example, say you have written a shopping cart for your ecommerce web site. You've got a bunch of unit tests that check it works as you expect it to and everything's fine. You move on to something else. Later, a team member makes a change to the shopping cart code (I don't know why; maybe regulations have changed or a new feature is on the cards). If his change breaks something, you've got a good chance that Visual Studio will alert you very quickly, the next time the tests are run. Your chances are better if you have an encyclopaedic set of tests.

This kind of thing increases the confidence you have in your codebase, and that can only be a good thing. Unit tests take a lot of work but most developers, and some project managers, who have experience of them think it's easily worth it.

The best resource I found to learn about unit testing, and in particular, unit testing in ASP.NET MVC, is this article in MSDN Magazine. It is quite an old article and covers the prehistoric MVC 3, but all the principles apply to MVC 4 and 5 as well so stick with it.

Repository and Service Layers in MVC

One thing that the MSDN article on unit testing is hot on is the layered architecture you should use to make your code more testable. Although the article emphasises the necessity for repository and service layers to underlie your MVC controllers, it doesn't really make it clear why they help. So remember that unit tests are only supposed to test one thing (the clue is in the name). So if you want to test some property or method in the service layer, you don't want the code to have to connect to any database because that would be testing two things: the code and the database connection. If the test fails it then becomes harder to see what went wrong.

If you have a repository layer underneath the service layer, you can run a test that uses a pretend repository, instead of the real one. This is called a mock and is just an object that looks like and acts like a repository to the service layer. So you create a real service object with the code you want to test, you pass it a mock repository, then run the test and check that you have the right results.

The key to this is that the mock repository passes something reasonable to the service layer. You can test that the service code did the right thing to this something without knowing whether the mock repository connected to a database or not.

At the risk of boring us both, let's think about a specific example: You want to test a method called ApplyDiscount. This applies a 25% discount to a product's price. ApplyDiscount calls a method in the repository called GetProduct. In the deployed application, GetProduct looks up a product in a database. In the test, you don't care whether GetProduct returns a product it found in the database or one it made up at random, you only care that ApplyDiscount works the way you want it to. So you create a mock repository and tell it that whenever you call GetProduct, you want it to return a Yoyo with a price of £40. Then in the test, you check to see that ApplyDiscount returns a discounted price of £30. If the test passes, you can go to the pub/bar.

One thing will confuse you if you read multiple articles on this topic: the terminology is not universally consistent. For example, people disagree on their definition of "repository layer" and "service layer". I suggest you stick to the MSDN article, in which the repository contains all the data access code and the service layer contains all the business logic (I'd much rather have this in the service layer than in the controllers, which should really just get model objects and pass them to views or take other simple actions). This means that your MVC model is split into repository and service layers. If you disagree on this terminology, well that's life. Just make sure you know how your terminology maps to what others are using.

Decoupling, Mocking and Dependency Injection

So I suppose you noticed that above I said "you can run a test that uses a pretend repository, instead of the real one" as if that was like tying your shoelaces. Well it turns out that there is an easy and a difficult way to do this.

Before you can do it at all you must decouple your classes from each other. This is a fancy way of saying that you write service layer classes that can use either a real repository or a mock repository. This is also helpful because the service layer could then use a SQL repository or an Oracle repository or some other repository to store data. You do this by creating an interface to define the repository - all repositories must implement the methods and properties in the interface. Then, in the service layer constructor, you specify that a repository must be passed in for the service layer to use whenever it gets created. You do the same thing with MVC controller constructors, by the way, except that they expect to receive a service layer object.
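
In code, the decoupled arrangement looks something like this (a sketch - all the names are made up):

public class Product
{
   public string Name { get; set; }
   public decimal Price { get; set; }
}

public interface IProductRepository
{
   Product GetProduct(string name);
}

public class ProductService
{
   private readonly IProductRepository repository;

   //Anything that implements IProductRepository will do: the real
   //EF repository in production, a mock object in a unit test
   public ProductService(IProductRepository repository)
   {
      this.repository = repository;
   }

   public decimal ApplyDiscount(string productName)
   {
      //Business logic under test: 25% off the product's price
      Product product = repository.GetProduct(productName);
      return product.Price * 0.75m;
   }
}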

In the real application the service layer gets a real repository to work with when it gets created. The constructors get a real service layer object to work with. You can control which real objects they get by installing and configuring a dependency injection framework. Ninject is one example. I personally prefer Unity, not least because there is a very thorough explanation of it in MSDN, which I think is the best place to learn that and a lot better than many of the frankly confusing explanations I've struggled through.

For a unit test, you must create a mock repository and pass it to the real service layer object you are testing. Or you could create a mock service layer object and pass it to the real controller you are testing.

You can create such mock objects manually - by which I mean you write a class that implements the correct interface but returns made-up values instead of information from a database (or some such place). This sounds easy but in reality is the difficult way to do this. You end up working so hard to create such mocks that testing becomes a nightmare.

Fortunately you're not the first person to face this challenge and to help you the gods, by which I mean NuGet, have given you mocking frameworks. In a unit test, you can use the mocking framework to create an object that implements the right interface, for example the repository interface, so you can use it to test the service layer. The mocked object won't return anything by default. Instead you tell it what you want it to return for specific calls. For example, you might tell it to return a Yoyo with a price of £40 whenever the test calls the GetProduct method (this should ring a bell). There are lots of mocking frameworks - I use Moq.
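
To make that concrete, here's roughly what the Yoyo test looks like with Moq (a sketch using the made-up names from the decoupling example above):

[TestMethod]
public void ApplyDiscount_Takes_25_Percent_Off()
{
   //Tell the mock repository to return a £40 Yoyo on demand
   var mockRepository = new Mock<IProductRepository>();
   mockRepository.Setup(r => r.GetProduct("Yoyo"))
      .Returns(new Product { Name = "Yoyo", Price = 40m });

   //Test the real service code against the mock repository
   var service = new ProductService(mockRepository.Object);

   Assert.AreEqual(30m, service.ApplyDiscount("Yoyo"));
}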

So there's your high-level explanation. I really hope this helps you navigate the minefield of contradictory web pages and get the knowledge to start doing full-blown unit testing and TDD in all your projects. It's not as hard as it seems.

Feb 23
Using JQuery in Sandboxed Web Parts

Hi All,

You've probably noticed, as I have, how useful the JQuery library is for just about all web development. Yes, you can use it in your SharePoint client-side code and in fact I would always consider it because it tends to accelerate your development. 

In this post I'll describe a simple way to use JQuery in a sandboxed web part. That means you can build it into a user solution that anyone can run in the sandbox, and which can therefore be deployed without the approval of farm administrators (depending, obviously, on the way roles are assigned in your SharePoint farm). If you're developing a custom solution that you want to be widely used, you should make it a user solution if at all possible. Otherwise you'll have to convince the farm administrators that it's reliable and secure.

Adding JQuery to Your Web Part

To use JQuery in a web part, you must do three things:

  1. Render a <script> tag that includes the JQuery library in the page.
  2. Render a JavaScript function that uses the JQuery functionality.
  3. Render something that calls the function, often from a UI component such as an <a> tag.

There are lots of ways to do that, some of which I'll talk about at the end of this entry. The simplest method for a stand-alone web part would be to use the Visual Web Part project template in Visual Studio. That way you can just type your script tags, functions and UI components into the web part's markup. However, you can't deploy a Visual Web Part in the sandbox, so we need something else.

Instead, you can create a new Web Part (i.e. a non-"Visual" web part) project and override the Render method like this:

protected override void Render (HtmlTextWriter writer)
{
   //All the subsequent code goes here.
}

Then you use the writer object to do your rendering. Firstly, the script link to include JQuery in the page:

writer.AddAttribute(HtmlTextWriterAttribute.Src, "http://ajax.microsoft.com/ajax/jquery/jquery-1.6.3.js");
writer.AddAttribute(HtmlTextWriterAttribute.Type, "text/javascript");
writer.RenderBeginTag(HtmlTextWriterTag.Script);
writer.RenderEndTag();

Notice that I've linked to JQuery hosted at Microsoft. You could choose another location such as jquery.com with no problem, or you could host JQuery somewhere in your SharePoint farm and link to that. Also the link rendered by the above code is good for the development phase. When you're finished coding, you should use a minimised version of the library, such as jquery-1.6.3.min.js, to get the smallest script payload possible and therefore accelerate page loading.

Next, create a script that calls JQuery and store it in a string variable. Here's an example that displays items in the Announcements list, which it gets through the ListData.svc web service:

string functionJavaScript = @"
   function getListItems() {
      //Formulate a URL to the service to obtain the
      //items in the Announcements list. You must amend
      //this URL to match your site and list name
      var Url = 'http://intranet.contoso.com/' +
         '_vti_bin/ListData.svc/Announcements';
      //call the jQuery getJSON method to get the
      //Announcements
      $.getJSON(Url, function (data) {
         //Formulate HTML to display results
         var markup = 'Announcements:<br /><br />';
         //Call the jQuery each method to loop
         //through the results
         $.each(data.d.results, function (i, result) {
            //Display some properties
            markup += 'Title: ' + result.Title + '<br />';
            markup += 'ID: ' + result.Id + '<br />';
            markup += 'Body: ' + result.Body + '<br />';
         });
         //Call the jQuery append method to display the HTML
         $('#JQueryDisplayDiv').append($(markup));
      });
   }";

You can see here how easy the JQuery functions, such as getJSON and each, are to use. Now you must render that script in the web part:

writer.AddAttribute(HtmlTextWriterAttribute.Type, "text/javascript");
writer.RenderBeginTag(HtmlTextWriterTag.Script);
writer.Write(functionJavaScript);
writer.RenderEndTag();

The final stage is to render the user interface. In this case, I'll render a link for the user to click. That calls the getListItems function we just rendered. Notice that the getListItems function displays its SharePoint items in a <div> with ID "JQueryDisplayDiv". So I must remember to render that <div>:

//Render the display html.
//First an h2 tag
writer.RenderBeginTag(HtmlTextWriterTag.H2);

//Then a hyperlink that calls the JavaScript method
writer.AddAttribute(HtmlTextWriterAttribute.Href, "javascript:getListItems();");
writer.RenderBeginTag(HtmlTextWriterTag.A);
writer.Write("Click Here to Obtain List Items");
writer.RenderEndTag();

//End the h2 tag
writer.RenderEndTag();

//Render a div to display results
writer.AddAttribute(HtmlTextWriterAttribute.Id, "JQueryDisplayDiv");
writer.RenderBeginTag(HtmlTextWriterTag.Div);
writer.RenderEndTag();

And that's it! Build and deploy the project, then add the new web part to a SharePoint page. When you click the link, the items are shown.

Other Methods

Many bloggers advise you to place the JQuery link in the master page, either with simple markup or via a Delegate Control. That's certainly a good idea because it makes JQuery available throughout the site in one step. Then in your web parts, field controls, or other components you can skip stage one. But you need write access to the master page gallery to do that, plus you usually need someone to approve your master page changes (I'm certainly very careful about who can alter my master pages and what workflow needs to complete before they reach production).

The advantage to taking the above approach is that everything's wrapped up in a single web part that you can encapsulate in a user solution for easy, sandboxed deployment.

Feb 07
Styling a SharePoint Online Blog

As you can see, I've just updated this blog with a new look and feel. It's simple and straightforward, and probably won't win any web design prizes, but it is a good example of how to use a custom master page and style sheet to impose your own design on SharePoint.

There are plenty of resources and community content on how to do this. In this post I'll just highlight some SharePoint specific issues I came across and explain how I solved them. Hopefully, this might solve some problems for you.

Designing for SharePoint 2010

The general approach is like this:

  1. Create a look and feel in a graphic design package. I used Microsoft's Expression suite but there are plenty of others. 
  2. Create a mock-up of this in HTML and CSS. You'll need to fix fonts, colours, logos, and other graphical elements such as gradients etc. Up to this point, we've done nothing SharePoint-specific and, in fact, this is exactly how every web site is designed and implemented. Next come the SharePoint bits.
  3. Create a SharePoint master page. You should obtain a starter master page designed especially for SharePoint 2010. You blend your mock-up HTML with this to create a look and feel that SharePoint can use. I used, and highly recommend, Randy Drisgill's master pages that are on codeplex. For this, I used Visual Studio, because I wanted a user solution with all the bits deployed exactly where I want them, but you can also do this in SharePoint Designer.
  4. Lots of testing, lots of fixing before deployment. In particular, you should watch out for Microsoft's styles from core.css that break your design. There are some examples below.


Things That are Unique to This Blog

I had a few specific requirements:

  • I wanted to style a blog hosted in Office 365 and SharePoint Online, not SharePoint Server on-premise.
  • I have a small-business subscription to Office 365, not an enterprise subscription.
  • The blog is a sub-site of the default SharePoint Online website, which is the top-level site of the site collection. In a small-business subscription, you can't create new site collections.
  • I like to use Visual Studio to create easily-deployable user-solutions.

Here are some of the problems I came across and how I fixed them.

No Publishing Feature or Page Layouts

A small business subscription to Office 365 is very good value and I do recommend it, but there are a few things you don't get that can leave you a little frustrated. One of these things is the lack of the SharePoint 2010 Publishing feature. This usually appears at the site collection level and Enterprise customers can choose to enable it but us lowly small businesses see nothing. It's a big problem for this task, because the Publishing feature enables Page Layouts and those make things much more flexible for SharePoint Web Content Management.

All the general features of your design should go on the master page, such as branding, headers, footers, navigation components and so on. Within this, you can use Page Layouts to massage the actual content of each item. For example, you can render a description field in your own way or place a graphic on the right. Since each Page Layout is associated with a content type, you can create different layouts for news stories, technical articles, and other types of content.

Without them, things are much less controllable. Basically all the content of the item being displayed is rendered in a single <asp:ContentPlaceHolder> tag - the one with the id "PlaceHolderMain".

The first consequence of this is that accessibility is tricky. In particular, the PlaceHolderMain tag renders all its content as tables, which are not great for those with visual impairment. If that's you, I apologise - not a lot I can do to avoid it I'm afraid. If anyone knows a work-around let me know.

The second consequence is that it's hard to control precisely the content rendering within each item. However, you can achieve quite a lot by overriding Microsoft's styles. For example, I wanted the date and text of each story to appear in a white box with a black border. As you can see, I partially managed this by overriding the ms-rightblogpost style in my custom stylesheet. However you can also see that the date is outside the white box. This is annoying but I decided to compromise. Here's the style that does it:

.ms-rightblogpost
{
    width: 100%;
    background-color: white;
    border: 2px #676767 solid;
    padding: 10px;
}

Anonymous Access to the Blog

This is an issue that has affected a lot of SharePoint Online users. The default situation is that anonymous users can access the home page of the blog but when they click on anything, such as an individual entry or a category name, they are asked to log on. Obviously you want these users to get read access to most content, so how can we enable it?

Martin Hatch has a really excellent explanation of the problem and a user solution you can deploy to fix it.

However, I prefer Adrian Fiechter's solution for two reasons. Firstly, because it uses feature receivers to run its code so you don't have to mess around with a Web Part. Secondly, because it includes a feature that enables moderated comments. We've enabled this solution and it works a treat.

Hiding the Ribbon for Anonymous Users

This being a public blog, I was uncomfortable with the ribbon appearing at the top of the page for everyone. I should be the only one to see it, when I'm logged on. You can hide it by using a <SharePoint:SPSecurityTrimmedControl> control. This control hides the markup within it if the current user does not have the permission you specify in its PermissionsString attribute. That's pretty easy. However, you must take care where you place the SPSecurityTrimmedControl. If you place it around the s4-ribbonrow div, you break scrolling for anonymous users for some strange reason. In fact you should place it just within the s4-ribbonrow.

That works pretty well, but you do end up with a dark blue band at the top of the page where the ribbon would be. To hide this, use CSS to style the s4-ribbonrow like this:

body #s4-ribbonrow { min-height: 0 !important; height: auto !important; }

This will hide it when there's no content in it, in particular when the user is anonymous and the SPSecurityTrimmed control is concealing the ribbon. For more details of this technique, have a look at Kyle Schaeffer's blog.

Microsoft Styles to Override

Several of the styles imposed by core.css caused me trouble. The first three were all styling comments and imposed a minimum width of 775px. I fixed them like this:

.ms-commentsempty { width: 100%; }
.ms-commenttable { width: 100%; }
.ms-CommentBody { width: 100%; }

There was also exactly the same issue with the .ms-PostWrapper class, so you need this:

.ms-PostWrapper  { width: 100%;  }

I also wanted to ensure that the title of posts used the same font as the title of the page:

.ms-PostTitle  { font-family: Garamond , Times New Roman, serif;  }

There's also a Microsoft style that imposes a font-size of 8pt on the text in a post. I overrode it this way:

.ms-PostBody  { font-size: 85%; }


Finally I hid the web part page description, which was duplicating what I saw in the page sub title:

.ms-webpartpagedescription  {  visibility: hidden;  }


Hope all this helps!

Jun 20
Exchange Online for webOS smart phone owners

Hi All,

So no doubt you've noticed that the Web Dojo website and this blog are now hosted on SharePoint and, given that I never got time to learn Drupal thoroughly, this is a mighty good thing for me. Web Dojo has joined the Office 365 beta so we get Exchange Online, SharePoint Online, and Lync Online. Currently this is free but, post-beta, it should be £4 per month if they keep their promise. That made it a no-brainer for us. I'll be posting my experiences with it here along with all the other SharePoint bits and pieces.

Exchange Online and webOS Phones

It's fair to say that Palm hasn't shifted as many of these as they wanted, so the number of you that own a webOS smart phone, like the Pre, Pixi or Veer, is probably pretty small. However, I do, and I've upgraded to the Pre 2 because they're really pretty good, especially if you're a bit of a hacker like me. Just don't ask me about the webOS HTML5 Audio object. Also, HP has a webOS tablet and two new phones so that number may increase, you never know!

Anyway, if you're a webOS user, you'll know that you can set up your phone to connect to your Exchange server. If you do, your appointments, tasks, and contacts automatically show up on the phone and life is very much easier. 

webOS and Exchange Online Configuration

The good news is that it works for Exchange Online as well as for the usual on-premise Exchange servers. However, you'll need to install one or two extra certificates on the phone, otherwise you just get logon errors. Here are my steps to configure this. I'm using Exchange Online as part of Office 365 but I think this'll work for BPOS as well. These steps use Firefox on the PC, but the procedure will be similar for other browsers:

  1. On your PC, in Firefox, open the Outlook Web Access page in the browser. The address will be something like http://abcdef01234.outlook.com/owa
  2. Go to the Options tool and click Advanced.
  3. Click the Encryption tabs and then click View Certificates.
  4. Find the GTE Corporation section.
  5. Export both the GTE Cyber Trust Global Root certificate and the Microsoft Internet Authority certificate. If there's more than one of either, export the one that hasn't expired!
  6. Save them somewhere convenient in X.509 Certificate (PEM) format.
  7. Email them to a mail account that you can already access on your phone (if you don't have any other mail account, you could copy them to the USB drive on your phone instead).
  8. On the phone, open the mail. Tap the first certificate.
  9. The Certificate Manager tool opens and asks you if you trust the new certificate. You do.
  10. Repeat steps 8 and 9 for the second certificate.
  11. Go to the Email app, then open Preferences and Accounts.
  12. Add an Email account, and choose Manual Setup.
  13. Select Exchange (EAS).
  14. Enter your email address on the Exchange server.
  15. In the Incoming Server box, type the server name you used in step one. Note: make sure you use https: and leave the "/owa" bit off. It will be something like: https://abcdef01234.outlook.com
  16. Leave the Domain box blank.
  17. In the Username box, type your full email address.
  18. Enter the password.
Then everything should work as expected.

Hope this helps someone!
Dec 02
A Field Control Shortcut

More SharePoint WCM issues - this time something that came up for a recent customer project. Field Controls are used in SharePoint WCM sites to enable customised fields within Web pages. Let's say you have an unusual type of data to display - you can create a custom Field Type in SharePoint to store it and add it to the Content Types for pages that will display it. Then you provide a custom display of the Field Type for site visitors and a custom edit experience for authors by writing a Field Control. You add the Field Control to your page layout and users can start viewing and changing the values.
 
Some documentation also points out that you can provide a custom editing experience with a standard display by creating a Field Control without a new Field Type. But our case was slightly different. We had simple fields to edit - we wanted to enable authors to set meta tags for each page when they edit the page itself. So we used a standard SharePoint Field Type - Single Line of Text. We also wanted a pretty standard editing experience - just a text box. However we needed a custom display behaviour, to render the values as <meta> tags in the page header.

The Usual Approach to Field Controls

Firstly, if you need full details on writing custom SharePoint Field Types and Field Controls, I recommend reading Chapter 10 in Andrew Connell's book "Professional SharePoint 2007 Web Content Management Development" from Wrox Press. Or a Google search - there are plenty of blog entries and so on.
 
By the way - Field Types and Field Controls haven't changed much in SharePoint 2010 so this whole discussion applies to both versions.
 
Usually you must develop:

  • A Field Type. This ties all the bits together and is coded in a .DLL usually deployed to the GAC.
  • A Field Value Class. This defines the data structure for any Field Type that is more complex than a simple string. Again this is in a .DLL in the GAC.
  • A Field Type Definition. This is an XML file you deploy into the 12 hive (14 in SharePoint 2010!) that tells SharePoint about the Field Type. It has a pointer to the Field Type Class and some other information.
  • A Field Control. This class, again in a .DLL in the GAC, provides the custom editing experience for authors and editors.
  • Optionally, a Field Rendering Control. This is actually an ASP.NET user control in a .ASCX file deployed to the 12/14 hive. The Field Control uses this to render the Field Type in Edit mode.

In Andrew's book and in other documentation, it's mentioned that you can also use existing Field Types with custom Field Controls when you want to provide a new editing experience without a new kind of data. In this case, because you're going to use an existing Field Type you just have to write the Field Control and the Field Rendering Control.

Our Situation

Our case was slightly different: we wanted to use a standard editing experience but a custom display. Built-in fields are usually rendered with FieldValue controls, but that isn't suitable for us because we want to place our value text inside an attribute of the <meta> tags, not its main content, like this:

<meta name="keywords" content="This is where we wanted to place the value">

Also, of course:

  • In display mode, the whole <meta> tag has to be within the <head> tags, not in the page body.
  • In edit mode, the Field Control has to be in the page body, where editors can see it and change the contents.

So our solution was like this:

  • We used the Single Line of Text field type to create two fields in the right Content Types. These will store meta data keywords and description. You could create more fields for more meta tags if you need them.
  • We updated the list views to display these. That's optional.
  • We created a Field Control to edit these. This control displays nothing when in display mode.
  • We created an ASP.NET Web control to generate the <meta> tags and put the values into the content attribute in each case (there's a sketch of this below). We placed these within the <head> tags of the master page, but you could also put them inside the <PlaceHolderAdditionalPageHead> content tag on the relevant page layouts.

I've described this case because it's just slightly different from any situation I've seen documented - i.e. a standard edit experience with a custom display. Hopefully it might help some people with their projects.
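
To give you the idea, here's a rough sketch of the display side (the class name and the "Keywords" field name are invented for illustration; error handling omitted):

using System.Web.UI;
using System.Web.UI.WebControls;
using Microsoft.SharePoint;

public class KeywordsMetaTag : WebControl
{
   protected override void Render(HtmlTextWriter writer)
   {
      //SPContext gives us the list item behind the current page
      SPListItem item = SPContext.Current.ListItem;
      if (item != null && item["Keywords"] != null)
      {
         //Render <meta name="keywords" content="..." />
         writer.AddAttribute("name", "keywords");
         writer.AddAttribute("content", item["Keywords"].ToString());
         writer.RenderBeginTag(HtmlTextWriterTag.Meta);
         writer.RenderEndTag();
      }
   }
}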
 
The last thing to say is that you should certainly enable authors to edit meta tags on a page-by-page basis, because search engines rank your site higher when they see that meta tags are different on each page of your site. We had this problem to solve because the SEO company the client had hired specified it as a requirement. I'd certainly add this feature to any SharePoint Internet-facing site.

Jun 16
Content Deployment Possibilities

Hi All,

Content Deployment is a useful service in SharePoint designed to help out in Web Content Management situations. It's how you get your content from one site to another, whether it's from authoring site to staging site, staging site to production site, or some other route. But you can use it in many more situations than that and I thought I'd get you chewing over some other possibilities.

Standard Content Deployment

The out-of-the-box service is pretty straight forward to use. All you do is:

  • Configure your farms. On the destination farm you must allow incoming content deployment jobs and set a few other parameters.
  • Create a path. This defines where you deploy from and to.
  • Create a job. This defines what you deploy and when you deploy it.

You do all this in Central Administration. By the way, there's no reason why the source and destination sites have to be in the same farm. They would be in the usual scenario with a separate production farm in the perimeter network but keep in mind that other possibilities exist!
 
You'll notice that, when you create a path, a "Quick Deploy" job is automatically created. Certain users can use this job to get urgent content deployed rapidly without waiting for the next deployment, which would often run overnight.

Using PowerShell for Content Deployment

SharePoint 2010 actually adds very little to Content Deployment. All of the above, for example, is the same in 2007 and 2010 and works well. One new feature is that you can deploy a content database snapshot instead of a site. However, PowerShell is new and comes with Content Deployment configuration cmdlets. These add some extra possibilities.
 
The cmdlets are pretty much what you'd expect. To see them type this command (after you've made sure the Microsoft.SharePoint.PowerShell snap-in is in place):
 
Get-Command -PSSnapin Microsoft.SharePoint.PowerShell *SPContentDeployment*
 
You can also see them here, although what those eDiscovery cmdlets have to do with Content Deployment is a mystery.
 
To summarise these, for both Paths and Jobs we have:
 
  • A Get cmdlet to list them or get a specific one.
  • A New cmdlet to create and configure one.
  • A Set cmdlet to reconfigure one.
  • A Remove cmdlet to delete one.
There's also a Start-SPContentDeploymentJob cmdlet to set things running. Notice that there's no cmdlet to configure the farm settings for Content Deployment so you'd have to do this manually (or write .NET code).
 
So why would you want to do this in PowerShell rather than Central Administration, given that most SharePoint administrators are pretty keen to avoid any kind of command line? My opinion is that if you're setting it up once, use Central Administration. It'll be quicker and a lot easier than working out all the necessary options. If, on the other hand, you're likely to be repeating this configuration, or you have more than, say, 20 objects to configure, it might be worth writing a script. Have a think about these kinds of situations:
  • Developers frequently create development sites from scratch and want actual content in their environments to test their code against. Because this task is repetitive, you'll save time in the long run with a script. By the way, Content Deployment won't deploy the custom components developers write, such as Web Parts, timer jobs, and event receivers. You'll have to use Solution Packages for those.
  • Users create their own sites and want to deploy content from them. E.g. it might be standard practice to create a new site for each project. You don't want to have to set up paths and jobs manually for each one.
  • You want to create 100 paths and 150 jobs. It might be worth putting the configuration values in, say, an XML file and writing a script that reads it and configures Content Deployment from it. You also might have to repeat this configuration later.
So you get the idea: as with many configuration tasks in SharePoint 2010 you can ease repetitive or complex tasks by writing a script.
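As a concrete example, here's a minimal sketch that creates a path and a job and then sets the job running. All the names, URLs and the Central Administration port are placeholders:

$cred = Get-Credential
$path = New-SPContentDeploymentPath -Name "Authoring to Production" `
    -SourceSPWebApplication "http://authoring" `
    -SourceSPSite "http://authoring/sites/news" `
    -DestinationCentralAdministrationURL "http://prodadmin:8080" `
    -DestinationSPWebApplication "http://production" `
    -DestinationSPSite "http://production/sites/news" `
    -PathAccount $cred

# No schedule here - we'll run the job on demand instead.
$job = New-SPContentDeploymentJob -Name "News Deployment" `
    -SPContentDeploymentPath $path -ScheduleEnabled:$false

Start-SPContentDeploymentJob -Identity $job

Wrap something like that in a function, feed it values from an XML file, and you have the 100-paths scenario covered.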

The Content Deployment APIs

The real possibilities come when you consider writing .NET code. Here you can use the Microsoft.SharePoint.Deployment APIs to get pretty much any content from any place to any other place (as long as it's in SharePoint, of course), so they have uses way beyond Web Content Management.
 
For example, all the following features of SharePoint actually use Content Deployment APIs to move content:
  • Copy and move operations in Site Manager
  • Variations
  • Import and Export operations in STSADM.EXE
You can use the Content Deployment APIs like this:
  1. Create an SPExportSettings object and set properties to configure the export operation.
  2. Pass the SPExportSettings object to an SPExport object and call its Run() method. This exports the content to a set of XML files in a temporary location. If you have configured the export to use compression, they are then placed into a CAB file.
  3. On the opposite end, create an SPImportSettings object to configure the import operation.
  4. Pass the SPImportSettings object to an SPImport object and call its Run() method.
So it's not difficult to use. Bear in mind also that PowerShell can use any .NET assembly, so you can call all these objects from a script if you don't want to use a compiled solution.
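Here's what those four steps look like in C#. It's a minimal sketch: the URLs, folder and file name are placeholders, and real code would need error handling.

using Microsoft.SharePoint.Deployment;

// Export: configure, then run. FileCompression packages the XML
// files into a single CAB (.cmp) file.
SPExportSettings exportSettings = new SPExportSettings();
exportSettings.SiteUrl = "http://authoring/sites/news";
exportSettings.ExportMethod = SPExportMethodType.ExportAll;
exportSettings.FileLocation = @"C:\Deploy";
exportSettings.BaseFileName = "NewsContent.cmp";
exportSettings.FileCompression = true;
new SPExport(exportSettings).Run();

// Import: point at the same CAB file on the destination side.
SPImportSettings importSettings = new SPImportSettings();
importSettings.SiteUrl = "http://production/sites/news";
importSettings.FileLocation = @"C:\Deploy";
importSettings.BaseFileName = "NewsContent.cmp";
importSettings.FileCompression = true;
new SPImport(importSettings).Run();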
 
So finally, let's think about possible uses of Content Deployment APIs. Here are some scenarios:
  • You want to move content in a document management system; for example, you want authors to write content in one site and readers to access it in another. The reason I mention this is that Content Deployment is usually used in WCM sites, but you should consider the Content Deployment APIs whenever you need to move content within SharePoint, including in ECM, Records Management, Portal, Collaborative and all other kinds of solution.
  • You want to move content instantly and manually, without waiting for the next deployment job. Even the Quick Deploy job takes up to 15 minutes, which isn't quick enough for all sites.
  • You want users or administrators to be able to deploy content wherever they are. For example you could create a Web service that triggers content deployment and then call it from an application on a smart phone.
  • You want to deploy content in response to a SharePoint event. For example, a new document is added to a folder, certain checks are run and then the document is deployed. There are a lot more events available in SharePoint 2010, so this is a very versatile scenario.

More Info

These are just some of the possibilities for Content Deployment. Hopefully I've shown that it has much greater significance in SharePoint than the relatively specialised use it's usually put to in WCM solutions. If you want to use it seriously in your organisation, I'd certainly read Stefan Gossner's really excellent set of blog articles about it, starting with the Deep Dive.

Hope you found this useful!
Nov 27
SharePoint 2010 and Silverlight in Web Parts

​Hi All, long time no blog - I've been really busy!
 
One recent project has involved Silverlight and SharePoint - a really hot topic both for SharePoint 2007 and SharePoint 2010. This will result in a Code Gallery resource soon, so I won't post code here, but I thought you might find some pointers helpful if you have to code anything in this area.
 
As you probably know, there is a Silverlight Web Part built into SharePoint 2010, and you can use it to host any Silverlight application within a Web Part page. So far, so good, but what if we want to connect those Silverlight Web Parts for communication the way we connect other Web Parts, e.g. a view Web Part to a filter Web Part? It turns out that the built-in Silverlight Web Part can't do this and, at least in Beta 2, can't do much other than display the Silverlight app.
 
One approach is to discard the built-in Silverlight Web Part and write a custom version. This gives you lots more flexibility and is pretty simple, given that you just have to render an <object> tag on the client side. You can do this in the RenderContents() method, as in the sketch below.
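Here's a rough sketch. The XapUrl property, plug-in size and class name are all illustrative; the essential part is the <object> tag written in RenderContents(). (For the client-side connections discussed later, you'd inherit from the SharePoint WebPart class instead.)

using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

// A stripped-down custom Web Part that hosts a Silverlight application
// by rendering the standard Silverlight plug-in <object> tag.
public class SilverlightHostPart : WebPart
{
    // URL of the .xap package to host; editable in the tool pane.
    [WebBrowsable(true), Personalizable(PersonalizationScope.Shared)]
    public string XapUrl { get; set; }

    protected override void RenderContents(HtmlTextWriter writer)
    {
        writer.Write(
            "<object data=\"data:application/x-silverlight-2,\" " +
            "type=\"application/x-silverlight-2\" width=\"400\" height=\"300\">" +
            "<param name=\"source\" value=\"" + XapUrl + "\" />" +
            "</object>");
    }
}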
 

Connecting Silverlight to Silverlight on Web Part Pages

 
To connect one Silverlight Web Part to another Silverlight Web Part, you can use Silverlight's Local Connection APIs. You set up a LocalMessageReceiver object in one application and a LocalMessageSender object in the other, telling it what receiver to send messages to. In a Web Part page, you'd want users to be able to configure this receiver name, bearing in mind that there might be more than one instance of the receiving Silverlight application on the page.
 
Our solution was to create a custom Web Part that rendered the Silverlight application. The Editor Part for the sender application enabled users to choose which of the receiver applications on the page to send messages to.
 
How to tell Silverlight which receiver the user had chosen? You can use initialization parameters (InitParams) within your Silverlight <object> tag, like this:
 
<param name="InitParams" value="ReceiverName=RenderTheReceiverNameHere" />
 
Silverlight can read these parameters in its Application_Startup event handler by checking the StartupEventArgs.InitParams collection. Simple!
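In code, that looks something like this. It's a sketch, assuming the InitParams key is ReceiverName as in the tag above:

using System.Windows;
using System.Windows.Messaging;

// In App.xaml.cs: read the receiver name from InitParams and start
// listening on it with the Local Connection API.
private void Application_Startup(object sender, StartupEventArgs e)
{
    string receiverName;
    if (e.InitParams.TryGetValue("ReceiverName", out receiverName))
    {
        LocalMessageReceiver receiver = new LocalMessageReceiver(receiverName);
        receiver.MessageReceived += (s, args) =>
        {
            // args.Message holds the text sent by the sender application.
        };
        receiver.Listen();
    }
    this.RootVisual = new MainPage();
}

// The sending application's side is the mirror image:
// new LocalMessageSender(receiverName).SendAsync("your message");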
 

Connecting Silverlight Web Parts and Other Web Parts

 
What about sending messages from a Silverlight application in a Web Part to a non-Silverlight Web Part? Or vice versa? The trick here is to learn about doing client-side connections with SharePoint Web Parts.
 
Client-side connections are not supported by the ASP.NET WebPart class we usually inherit from for Web Parts. Instead you must use the Microsoft.SharePoint.WebPartPages.WebPart class. You can read about creating client-side connections in MSDN or in this blog entry.
 
When you write a client-side connected Web Part, you have to render a JavaScript function that gets called to send or receive a message. Therefore, to integrate this with a Silverlight application, all you have to do is get Silverlight to call that JavaScript function.
 
So, let's take a Silverlight application from which we want to send a message to a non-Silverlight consumer Web Part. The consumer Web Part involves no Silverlight at all and indeed could be a built-in Web Part.
 
As before, we'll need a custom Web Part that renders the Silverlight application and does client-side connections. We need this to tell the Silverlight application the name of the JavaScript function to call when it wants to send a message and, as before, we can use InitParams to do it. Again, fairly simple, and much more integrated with the SharePoint Web Part Connections infrastructure. So, for example, we can set up connections in SharePoint Designer as we can for other Web Parts.
 
The opposite case, in which a Silverlight application receives a message from a non-Silverlight Web Part, is similar except that the JavaScript that fires when a message arrives calls a public method in the Silverlight application.
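Here's a sketch of the Silverlight side of both directions, using the HTML Bridge. The JavaScript function name (sendFilterValue) and the scriptable object name (bridge) are illustrative; in our solution the function name arrived via InitParams.

using System.Windows.Browser;
using System.Windows.Controls;

public partial class MainPage : UserControl
{
    public MainPage()
    {
        InitializeComponent();
        // Expose this class to JavaScript for the receiving case, e.g.
        // slPlugin.Content.bridge.Receive("some message");
        HtmlPage.RegisterScriptableObject("bridge", this);
    }

    // Sending: call the JavaScript function the Web Part rendered.
    private void SendToConsumer(string message)
    {
        HtmlPage.Window.Invoke("sendFilterValue", message);
    }

    // Receiving: the Web Part's JavaScript calls this when a message arrives.
    [ScriptableMember]
    public void Receive(string message)
    {
        // Update the Silverlight UI with the incoming value here.
    }
}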
 
Hope that helps!
