Category: Development

23 Jun

Two Hackathons in a week

What a week. Two hackathons (‘hack’+marathon) in a row. That was exhausting.

  • A three-day hackathon with my colleagues from Arvato Systems and a customer. We used Cognitive Services with 8 different programming languages and created great PoCs.

  • The second hackathon was about Azure Stack with Microsoft.

Thanks to all participants and the organizers. It has been fun and a great experience. Now I am looking forward to seeing how the results will influence decisions for follow-up projects.

Besides the work, I enjoyed the opportunity to get to know you all better and did some interesting networking. Let’s see what events the future brings 😉

7 Jun

Widgets instead of Add-Ins/Apps?

The concept of Add-Ins (formerly known as Apps) in SharePoint puts your logic, HTML, and CSS on a separate page. This page is then rendered in an iframe on a SharePoint page. This approach has advantages and disadvantages; you have to weigh them yourself.

A very promising way to put stuff (or WebParts) onto a SharePoint page is the Widget Wrangler.

More information can be found on

Conceptually, the Widget Wrangler implementation is based on similar thinking as the PnP App Script Part implementation, which was released a few years back as part of the PnP patterns (at the time it was called App Model Samples). The advantage of this model is that you do not have to deal with iframe implementations, and functionality can be fully responsive where needed. Implementing capabilities is also much simpler when your JavaScript is embedded directly in the page rendering logic without additional complexity.

In the demo section, Bob shows the following topics:

  • How to use Widget Wrangler with plain JavaScript?
  • How to use Widget Wrangler with jQuery?
  • How to use Widget Wrangler with KnockoutJS?
  • How to use Widget Wrangler with Angular?
  • How does Widget Wrangler handle multiple instances of same widget in the SharePoint Pages?

I like that. As soon as I have some spare time, I will take a closer look.

Custom field and UpdateFieldValueInItem()

Recently I was developing a custom field. To store modified values, the UpdateFieldValueInItem method has to be overridden.

In the normal flow, when the user clicks the submit/save button, the method is called and I can adjust the value of the field within the current item. The changes are submitted to the database later.

But what if you want to modify items outside of the current item? Sure, you can do so, you might think. But you’ll need to consider the scenario where the user does not click submit/save. The method is called on every postback, and the PeoplePicker causes a postback when it validates its values. There might be other controls that behave this way as well.

My problem was that I could not modify items other than the current one without checking whether submit/save was clicked. I ended up checking the form for the control that triggered the postback. If that value contains “SaveItem”, I am good.

/// <summary>
/// Updates the underlying value of the field to the latest user-set value.
/// </summary>
public override void UpdateFieldValueInItem()
{
	// do not trigger the logic if a postback without submit occurs, e.g. by the PeoplePicker
	string eventTarget = Context.Request.Form["__EVENTTARGET"];
	if (eventTarget != null && eventTarget.Contains("SaveItem"))
	{
		base.UpdateFieldValueInItem();
		// safe to modify other items here
	}
}

So if you need to know if the item is currently being saved or if you are within a regular postback, look at your form 🙂

What is Dependency Injection?

This post will help you understand what DI (Dependency Injection) is, and how easily you can adopt the design pattern to create flexible and testable code. For a definition, take a look at this Wikipedia article.

From the three types of Dependency Injection

  • constructor injection
  • setter injection
  • interface injection

this post covers the interface type.

The example will deal with a SharePoint list. This list can either be the posts list of a blog, or the comments list. The distinction is necessary, as both lists have different fields, which might be used for querying data later.

class Program
{
	static void Main(string[] args)
	{
		var list = new SharePointList { SharePointListType = SharePointList.ListType.Posts };
		Console.WriteLine(list.GetListTitle());

		list = new SharePointList { SharePointListType = SharePointList.ListType.Comments };
		Console.WriteLine(list.GetListTitle());
	}
}

class SharePointList
{
	internal enum ListType
	{
		Posts,
		Comments
	}

	private string _listTitle;

	public ListType SharePointListType { get; set; }

	public string GetListTitle()
	{
		switch (SharePointListType)
		{
			case ListType.Posts:
				_listTitle = GetListTitleFromSomewhere("Posts");
				break;
			case ListType.Comments:
				_listTitle = GetListTitleFromSomewhere("Comments");
				break;
			default:
				throw new ArgumentOutOfRangeException();
		}
		return _listTitle;
	}

	private string GetListTitleFromSomewhere(string listName)
	{
		// dummy
		return Guid.NewGuid().ToString();
	}
}

It produces two lines on the console, with different GUIDs as list titles.

What is the problem with this code? Why are we talking about Dependency Injection?

Look at the code above again, and consider this new requirement to your code:

I need the categories list as well.

To fulfill this new requirement, you’ll need to extend the enum and the code within the switch statement. Wouldn’t it be better to just change the Main method instead?

This is an example of the solution with an interface and implementations for each list type:

internal class Program
{
	private static void Main(string[] args)
	{
		var lists = new List<ISharePointList>
		{
			new PostsList(),
			new CommentsList(),
			new CategoriesList()
		};

		foreach (ISharePointList list in lists)
		{
			list.ConsoleWriteLine();
		}
	}
}

internal interface ISharePointList
{
	string GetListTitle { get; }
	void ConsoleWriteLine();
}

internal class PostsList : ISharePointList
{
	public string GetListTitle { get { return "Posts"; } }

	public void ConsoleWriteLine()
	{
		Console.WriteLine("Posts: " + GetListTitle);
	}
}

internal class CommentsList : ISharePointList
{
	public string GetListTitle { get { return "Comments"; } }

	public void ConsoleWriteLine()
	{
		Console.WriteLine("Comments: " + GetListTitle);
	}
}

internal class CategoriesList : ISharePointList
{
	public string GetListTitle { get { return "Categories"; } }

	public void ConsoleWriteLine()
	{
		Console.WriteLine("Categories: " + GetListTitle);
	}
}

As you can see, the Main method now uses three lists. You could even use reflection to get all classes that implement the interface “ISharePointList”, to have even more flexibility within your code.

var lists = new List<ISharePointList>();
var currentAssembly = typeof(Program).Assembly;
var types = currentAssembly.DefinedTypes.Where(type => type.ImplementedInterfaces.Any(i => i == typeof(ISharePointList)));
foreach (TypeInfo typeInfo in types)
{
	lists.Add((ISharePointList) Activator.CreateInstance(typeInfo.AsType()));
}

Of course each list needs its own implementation. But you would need to implement different logic anyway.

Dependency Injection means passing objects around without caring what exactly they are; they are resolved to their actual type later, after they have been injected into something. In this example, the “foreach (ISharePointList list in lists) {}” loop resolves the actual type.

By implementing the interface, you can pass in completely different objects if you need them, e.g. for testing purposes. Testing SharePoint is not easy, and it might help to pass a dummy that returns known values to verify that the basic logic is working.
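As a minimal sketch of that idea (the FakeList class name is my own, not from the example above), a dummy implementation can stand in for a real list during tests. The interface is repeated here so the snippet compiles on its own:

```csharp
using System;

// the interface from the example above, repeated so the snippet is self-contained
internal interface ISharePointList
{
	string GetListTitle { get; }
	void ConsoleWriteLine();
}

// hypothetical test double: returns a fixed title instead of touching SharePoint
internal class FakeList : ISharePointList
{
	public string GetListTitle { get { return "Fake"; } }

	public void ConsoleWriteLine()
	{
		Console.WriteLine("Fake: " + GetListTitle);
	}
}
```

Any code that accepts an ISharePointList (or a List&lt;ISharePointList&gt;) can now be exercised with FakeList, so the surrounding logic is testable without a SharePoint farm.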

Migrate SharePoint Blog to WordPress

As promised here, this is a follow-up post with the tool I developed for the SharePoint to WordPress migration.

First, a screenshot:

Migrate SharePoint to WordPress Screenshot

What is it that we have to cover with a migration? Copying the posts is not enough. So I came up with these features:


  • Copy posts
  • Copy comments
  • Copy resources like images and downloads
  • Create needed tags and categories
  • Modify links to local resources
  • Deal with HTTPS, if links on the source blog are absolute and mixed with HTTP
  • Use web services to connect to source and destination
  • URL rewriting (covered by a WordPress plugin)
  • Delete all content from the destination blog (for migration testing)
  • Replace strings (with regex)
  • A nice (WPF) GUI


Originally I built a plain console application. Then I thought that a console application would possibly scare some users, and after some time I wanted to do some WPF again. So I created a WPF application to wrap all the functionality in a GUI. This way it is easier to use for the folks out there who do not like black console applications 😉 Since I am using web services to connect to both blogging platforms, the tool can be executed on any client computer. No access to a server session is required.

To start, you obviously need the URLs of the source and destination blogs, as well as credentials for the destination blog. Since most blogs allow anonymous access, you’ll probably not need to fill in the source credentials. The migration starts by hitting the “Migrate Content” button. That should be it for using the tool. It will remember the last entries for the URLs and login names, in case you need to perform multiple runs, which was the case for me. The passwords will need to be re-entered for security reasons.

It’ll show the progress of all steps in a progress bar and text at the bottom of the application and tell you when it’s finished. Existing categories are mapped to new categories and used as tags, too. I’ve tested the tool with three blogs, one being my own with CKS:EBE installed. There really isn’t much more to configure to have your blog migrated to WordPress with this tool.

Some data needs to be modified before the blog can go live on the new destination. In the case of URLs, this is necessary to generate valid links on the destination. Fortunately, there is a plugin available to do some fancy rewriting. Since WordPress shows its own smilies, I wanted to get rid of the strings within the posts that reference smilies as images and replace them with, well, smilies. A text file named “replacestrings.txt” within the same directory takes lines with strings for replacement.

<img.[^>]*/wlEmoticon-smile_2.png"(>| >|/>| />|</img>)*;#:-)
<img.[^>]*/wlEmoticon-sadsmile_2.png"(>| >|/>| />|</img>)*;#:-(
<img.[^>]*/wlEmoticon-winkingsmile_2.png"(>| >|/>| />|</img>)*;#;-)

The sample will replace all my old smilie images with plain strings before the posts are created on the destination blog. The images that were used as smilies in the source won’t be copied to the destination, because they are no longer referenced. Otherwise I would have ended up with many smilie images. I like smilies 😀
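A minimal sketch of how such a replacement file can be applied (the class and method names here are my own; the actual tool may differ). It assumes each line holds a regex and its replacement separated by “;#”, as in the sample above:

```csharp
using System;
using System.IO;
using System.Text.RegularExpressions;

internal static class StringReplacer
{
	// applies every "pattern;#replacement" line from the file to the given text
	public static string ApplyReplacements(string text, string replaceFilePath)
	{
		foreach (string line in File.ReadAllLines(replaceFilePath))
		{
			if (string.IsNullOrEmpty(line))
				continue;

			// split only on the first ";#" so the replacement itself may contain '#'
			string[] parts = line.Split(new[] { ";#" }, 2, StringSplitOptions.None);
			string pattern = parts[0];
			string replacement = parts.Length > 1 ? parts[1] : string.Empty;
			text = Regex.Replace(text, pattern, replacement);
		}
		return text;
	}
}
```

Each post body would be run through ApplyReplacements before it is sent to the destination blog.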

You can stop reading here, if you are a user and would like to migrate your blog and download the tool. As a developer you might be interested on how the tool works…

Technical stuff

The tool gives me a good opportunity to explain some of the programming techniques I used for the migration. I will explain a few of them.

SharePoint offers web services (_vti_bin/lists.asmx); WordPress offers an XML-RPC interface (I used CookComputing.XmlRpc to connect). These two are used to connect to the blogs. Since the SharePoint web services need display names to address the posts and comments lists, I first queried them by list template.

Querying SharePoint for List Titles

Use the SharePoint Lists web service to get all lists of a site and search for specific lists like Posts and Comments. The lists are identified by the template used. That way I do not have a localization issue.

_lists = new Lists
{
	Url = string.Format("{0}/_vti_bin/lists.asmx", BlogUrl),
	Credentials = CredentialCache.DefaultNetworkCredentials
};

XDocument response = XDocument.Parse(_lists.GetListCollection().OuterXml);
IEnumerable<XElement> lists = response.Root.Descendants(XName.Get("List", _s.ToString()));
foreach (XElement list in lists)
{
	XAttribute listTemplate = list.Attribute(XName.Get("ServerTemplate"));
	if (listTemplate != null && listTemplate.Value == "301")
	{
		// found Posts list (template 301)
		PostListName = list.Attribute(XName.Get("Title")).Value;
		PostListServerRelativeUrl = list.Attribute(XName.Get("DefaultViewUrl")).Value.Replace("/AllPosts.aspx", string.Empty);
	}
	else if (listTemplate != null && listTemplate.Value == "302")
	{
		// found Comments list (template 302)
		CommentListName = list.Attribute(XName.Get("Title")).Value;
	}
}

With the list names retrieved, I can query the lists for data. The web services use display names to identify lists.

Get SharePoint items with paging via web service

XDocument response = GetListItems(postsConfig);
do
{
	XElement root = response.Root;
	foreach (XElement row in root.Descendants(XName.Get("row", _z.ToString())))
	{
		// parse data here
	}

	XElement node = root.Descendants(XName.Get("data", _rs.ToString())).First();
	XAttribute nextNode = node.Attribute("ListItemCollectionPositionNext");
	if (nextNode != null)
	{
		postsConfig.ListItemCollectionPosition = nextNode.Value;
		if (!string.IsNullOrEmpty(postsConfig.ListItemCollectionPosition))
		{
			postsConfig.PageSize = node.Attribute("ItemCount").Value;
			response = GetListItems(postsConfig);
		}
	}
	else
	{
		postsConfig.PageSize = null;
		postsConfig.ListItemCollectionPosition = null;
	}
} while (!string.IsNullOrEmpty(postsConfig.PageSize));

The method below actually queries the web service for list items. The properties of the SharePointListConfig class for the list title, ListItemCollectionPosition, and PageSize are simple string properties. The fields are specified so that only the data needed for the migration is retrieved.

private XDocument GetListItems(SharePointListConfig config)
{
	var xmlDoc = new XmlDocument();

	XmlNode ndQuery = xmlDoc.CreateNode(XmlNodeType.Element, "Query", "");
	XmlNode ndViewFields = xmlDoc.CreateNode(XmlNodeType.Element, "ViewFields", "");
	XmlNode ndQueryOptions = xmlDoc.CreateNode(XmlNodeType.Element, "QueryOptions", "");

	if (!string.IsNullOrEmpty(config.ListItemCollectionPosition))
	{
		ndQueryOptions.InnerXml = string.Format("<IncludeMandatoryColumns>FALSE</IncludeMandatoryColumns><DateInUtc>TRUE</DateInUtc><Paging ListItemCollectionPositionNext=\"{0}\" />",
			config.ListItemCollectionPosition.Replace("&", "&amp;"));
	}
	else
	{
		ndQueryOptions.InnerXml = "<IncludeMandatoryColumns>FALSE</IncludeMandatoryColumns><DateInUtc>TRUE</DateInUtc>";
	}

	// get all posts or comments
	if (config.ListItemType == SharePointListConfig.ListType.Posts)
	{
		// PostCategory for SP Blog, BlogTitleForUrl and Categories for EBE blogs
		ndViewFields.InnerXml = "<FieldRef Name='ID' /><FieldRef Name='Title'/><FieldRef Name='Body'/><FieldRef Name='PublishedDate'/><FieldRef Name='BlogTitleForUrl'/><FieldRef Name='Categories'/><FieldRef Name='PostCategory'/><FieldRef Name='Author'/>";
	}
	else
	{
		ndViewFields.InnerXml = "<FieldRef Name='ID' /><FieldRef Name='Title'/><FieldRef Name='Body'/><FieldRef Name='PostTitle'/><FieldRef Name='CommentUrl'/><FieldRef Name='EmailAddress'/><FieldRef Name='Author'/><FieldRef Name='Created'/>";
	}

	try
	{
		XmlNode ndListItems = _lists.GetListItems(config.GetListName(), null, ndQuery, ndViewFields, null, ndQueryOptions, null);
		return XDocument.Parse(ndListItems.OuterXml);
	}
	catch (System.Web.Services.Protocols.SoapException ex)
	{
		throw new Exception(ex.Message + Environment.NewLine + ex.Detail.InnerText, ex);
	}
}

After all data has been read, local resources parsed, and links replaced, we move on to the destination side.

WordPress specific details

As stated above, I’ve used an existing library. There are plenty of samples out there if you look for them. I’ve implemented the following methods.

public interface IWordpressXmlRpc
{
	WordPressFile newImage(string blogid, string username, string password, WordPressFile theImage, bool overwrite);

	MediaItem[] getMediaLibrary(string blogid, string username, string password, MediaFilter filter);

	bool deletePage(string blogid, string username, string password, int page_id);

	ExistingPostContent[] getRecentPosts(string blogID, string username, string password, int numberOfPosts);

	string newPost(string blogid, string username, string password, NewPostContent content, bool publish);

	bool editPost(string blogid, string username, string password, NewPostContent content, bool publish);

	bool deletePost(string blogid, string username, string password, int postid);

	int newComment(string blogid, string username, string password, int post_id, Comment comment);

	Comment[] getComments(string blogid, string username, string password, CommentFilter filter);

	bool editComment(string blogid, string username, string password, int comment_id, Comment comment);

	bool deleteComment(string blogid, string username, string password, int comment_id);

	string newTerm(string blogid, string username, string password, TaxonomyContent content);

	Term[] getTerms(string blogid, string username, string password, string taxonomy, TermFilter filter);

	bool deleteTerm(string blogid, string username, string password, string taxonomy, int term_id);
}

I would like to share some issues I ran into, so you don’t hit the same problems when programming against the WordPress XML-RPC interface.

Post deletion

Just call the wp.deletePost method? Almost. You’ll have to call it twice: the first call moves the post to the recycle bin, the second deletes it permanently.

Media deletion

There is no method to delete items from the media gallery 🙁 Fortunately, items within the gallery behave like pages. So if you implement and call the deprecated wp.deletePage interface, you can achieve what you want (remember to delete twice).
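As a sketch, that double deletion could be wrapped like this (DeleteMediaItem is a hypothetical helper name of my own; it relies on the IWordpressXmlRpc proxy shown above and a live WordPress blog, so it is not runnable on its own):

```csharp
// hypothetical helper built on the IWordpressXmlRpc proxy from above
internal static class MediaCleanup
{
	public static void DeleteMediaItem(IWordpressXmlRpc proxy, string blogId,
		string username, string password, int itemId)
	{
		// the first call moves the media item to the trash...
		proxy.deletePage(blogId, username, password, itemId);
		// ...the second call removes it permanently
		proxy.deletePage(blogId, username, password, itemId);
	}
}
```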

Categories and Tags

Both can be managed with the interface for terms; the string passed for the “taxonomy” parameter decides what to do. It can be “category” or “post_tag”.

Other than that, the WordPress API is pretty straightforward and easy to use.


The download contains an executable, which is the tool itself, and a folder with the complete source code.

Migrate SharePoint To WordPress

Using TLS with SmtpClient

A rather small change to your code can increase security by sending e-mails via an encrypted connection.

Recently I stumbled across code that sent e-mails with the System.Net.Mail.SmtpClient class. That piece of code did not try to communicate with the receiving SMTP server over an encrypted channel. So I changed it to enable a TLS connection.

  var message = new MailMessage();
  // ... set sender, recipient, subject and body here ...

  try
  {
    _smtpClient.EnableSsl = true;
    _smtpClient.Send(message);
  }
  catch (SmtpException ex)
  {
    // if the recipient mail server does not support TLS, send without encryption
    _smtpClient.EnableSsl = false;
    _smtpClient.Send(message);
  }

The change in my code was to enable TLS by default, and turn it off in case the receiving SMTP server does not support it. Everything else is untouched, which results in a small code change that increases security.

I am catching the SmtpException that is thrown if TLS is unsupported. You can read more about the property and what it changes here:

Most of the time “old” code is worth a review with current knowledge. But I am sure you know that already 🙂


TLS with SharePoint

The solution for warming up SharePoint

Most SharePoint farms have some solution running against the long loading times after an Application Pool recycle or an iisreset. There are many different ways to preload websites so your users get faster load times. So why another solution?

There are some questions that I think have not been dealt with before:

  • Most solutions require some sort of timer to be started (e.g. a Scheduled Task)
  • When should the warmup occur?
  • What about multiple WebFrontend Servers?
  • How about Claims support?
  • Which URLs have to be called? What about extended WebApplications?
  • New WebApplications require the warmup tool to be adjusted
  • Manual warmup after deployments
  • What about search?
  • Did the farm warm up correctly?

Years ago I developed a console application which warms up SharePoint by calling each site within a site collection. It has been updated multiple times with new features.

The basis of the new solution is still the “old” application. It has been integrated into the SharePoint Central Administration and the SharePoint Timer service. That way it can be configured through an Application Page and is executed by the SharePoint Timer on each WebFrontend Server. The solution has been tested with SharePoint 2010 and 2013.

A Custom Action displays a new link within the “Monitoring” Section of the Central Administration.


All WebApplications are listed and can be configured separately. The time the Application Pool recycles is read from IIS and set as the default time (+ 1 minute). That way you can ensure fast pages even shortly after the daily recycle.


A manual warmup can be started through the Timerjob page, or by downloading and executing a batch file (has to be executed on each farm server).

If you select to write to the EventLog, each execution of a job will write a summary to the Application Log. If all websites could be loaded without a problem, the Event ID will be 0; otherwise it will be 1.
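A sketch of how such a summary entry could be written (the source name “SharePoint Warmup” and the helper are my own illustration, not necessarily what the solution uses; writing to the event log requires a registered source):

```csharp
using System.Diagnostics;

internal static class WarmupLogger
{
	// writes a warmup summary; event ID 0 = all sites loaded, 1 = at least one failure
	public static void WriteSummary(bool allSitesLoaded, string summary)
	{
		EventLog.WriteEntry("SharePoint Warmup", summary,
			allSitesLoaded ? EventLogEntryType.Information : EventLogEntryType.Warning,
			allSitesLoaded ? 0 : 1);
	}
}
```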


The tool supports Claims WebApplications with Windows Authentication.

The download package contains two WSPs. One for a SharePoint 2010 farm, and the other for 2013.

Download: Download WSP Packages, Sources

Update November 11, 2014

  • Please restart the SharePoint Timer service on all farm servers after installing the solution

Building a Visual Studio project for SP2010/SP2013 (.NET 3.5/4.0)

In this post I will show you how you can use MSBuild to target your project for .NET 3.5 or .NET 4.0 and use a separate app.config file for each.

My Warmup Tool is supposed to work with SP2010 and SP2013. To achieve that compatibility, I have to change the TargetFramework of the project to be able to compile, as well as the app.config, so the application uses the desired Framework. I didn’t want to change the values manually every time. An automated solution had to be possible. And it is: some small changes to the project file and MSBuild will do all the work for you 🙂

So let’s look at the default .csproj file, which sets the TargetFramework and references the app.config.

  1: <?xml version="1.0" encoding="utf-8"?>
  2: <Project DefaultTargets="Build" xmlns="http://..." ToolsVersion="4.0">
  3:   <PropertyGroup>
  4:     <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
  5:     <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
  6:     ...
  7:     <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
  8:   ...
  9:   </PropertyGroup>

The version has to be exchanged because SharePoint 2013 uses the .NET Runtime 4, but SP2010 uses Version 2. Trying to build the project with the wrong Target Framework will fail, since the referenced SharePoint assemblies depend on the correct Framework Version.

An easy way to specify the value depending on a condition is to use the constants you can define on the build page within the project settings.


I use “SP2010” and “SP2013”, which can be used by MSBuild. You can change that value any time. Reloading the project is not necessary, as the Build-process picks up the value when it needs it.

Let’s get back to the TargetFramework. Switching the version depending on a defined constant (“SP2013” in my case) is done with two new PropertyGroups in the .csproj file of your project. I’ve included the lines below the Debug/Release PropertyGroups, because the DefineConstants property is defined there.

  1: <PropertyGroup Condition=" $(DefineConstants.Contains('SP2010')) ">
  2:   <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
  3: </PropertyGroup>
  4: <PropertyGroup Condition=" $(DefineConstants.Contains('SP2013')) ">
  5:   <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  6: </PropertyGroup>

Remove the default entry, which is the one you see on line 7 of the first code fragment above.

Now we are set up, and may compile the project for .NET 3.5 and .NET 4.0. Great. To have the application config include the supportedRuntime version as well, I’ve included two config files into my project.


The files are identical, except for the value of supportedRuntime, which is v2.0.50727 for .NET 3.5 and v4.0.30319 for .NET 4.0. Again, MSBuild is your friend for using one file or the other depending on the previously used constant “SP2010” or “SP2013”.
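For illustration, the 2010 variant of such an app.config would look roughly like this (the 2013 file differs only in the version value, v4.0.30319):

```xml
<?xml version="1.0"?>
<configuration>
  <startup>
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>
```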



The switching condition can be specified like this:

  1: <ItemGroup Condition="$(DefineConstants.Contains('SP2010'))">
  2:   <None Include="Config\2010\app.config" />
  3: </ItemGroup>
  4: <ItemGroup Condition="$(DefineConstants.Contains('SP2013'))">
  5:   <None Include="Config\2013\app.config" />
  6: </ItemGroup>

The default entry for including the root-level app.config file has been removed from the csproj file.

As a result of the effort, I can build my console application for SharePoint 2010 and SharePoint 2013 by switching the constant in the project settings. The corresponding app.config file is used as well.

SQL Access to Configuration DB required

In many cases you pass a URL string to connect to SharePoint. In my case I wanted to validate the URL by using this code:

  1: Uri requestUri;
  2: if (!Uri.TryCreate(absoluteUrl, UriKind.Absolute, out requestUri))
  3:   throw new ArgumentException(absoluteUrl + " is not a valid URL.");
  4: 
  5: SPWebApplication webApplication = SPWebApplication.Lookup(requestUri);

And here comes the “but”: I did not know that the account executing the code needs permissions on the Configuration Database!

So either grant permissions, or use something like this:

  1: using (var site = new SPSite(requestUri.AbsoluteUri))
  2: {
  3:   SPWebApplication webApplication = site.WebApplication;
  4: }

Happy SharePointing…

When a Feature gets installed

Have you ever thought about the Features folder and when a folder will be created for one of your features? Well, I did 🙂

Why is this relevant, anyway? To be able to activate a feature on a given scope, it has to be installed first. That’s why.

Action: stsadm -o addsolution
Result: The solution is added to the farm. Features are not available.

Action: stsadm -o deploysolution
Result: Feature folders are created and the features are available for activation.

Action: stsadm -o installfeature
Result: “A feature with ID xyz has already been installed in this farm. Use the force attribute to explicitly re-install the feature.”

Great. After deploying the solution, the feature is automatically installed and can be used. I did expect this, because installing a feature is a rather uncommon task.

Here comes another one. What if you add a feature to an existing – and deployed – solution and perform an upgrade?

Action: stsadm -o upgradesolution
Result: Adds the new feature folder.

Action: stsadm -o activatefeature
Result: “Feature with Id ‘4520d607-699b-4025-b605-5f988c97b368’ is not installed in this farm, and cannot be added to this scope.”

Oops. Did you expect that result? The feature has to be installed first!


If you add a feature to a solution, make sure the feature gets installed prior to usage! There are two ways:

  1. Install the new feature
  2. Retract and Redeploy the solution

Activating Features after Solution Deployment via VS

Visual Studio allows F5 deployment. I guess you all know that. The part where you have to think carefully is when you add Features to your project.

Should you activate “Activate On Default”? Well, it depends (as always). Usually I don’t enable that setting, because features tend to be activated on scopes you wouldn’t expect.

The problem

Take a WebApplication scoped feature for example. It might create SafeControl entries for your controls. Do you really want them to be added to an Extranet WebApplication if your solution is solely for an Intranet Application?

The problem does not exist for you if you auto-activate your features and have set your deployment configuration to “Default”. But in my case, I use “No Activation” and “Activate On Default = false” most of the time. Then, after you deploy an update of your solution, SharePoint retracts and re-adds the solution. The consequence is a deactivated feature 🙂 (in the case of Farm and WebApplication scoped features).

My solution

CKS rocks! What does that have to do with this?

The CKS Extensions for Visual Studio can update the solution like you would via PowerShell or stsadm, with a new deployment option named “Upgrade Solution (CKSDev)”.


Unfortunately, CKS isn’t available for the Visual Studio 2013 preview. So I had to do something else to avoid the problem of non-activated features after deployment.

Fortunately, Microsoft provides command line actions for post-deployment. And since the SharePoint URL is known from the project properties, it can be used via the variable $(SharePointSiteUrl). Combined with stsadm to activate the feature, I had all I needed.


So for now, Deployment from VS will work again 🙂

SPQuery for my tasks

Developing solutions with multiple languages (or a language other than English) can sometimes be a bit painful. To configure a WebPart to display only my tasks, I would filter for [Me] or [Ich].


To achieve the same via code/CAML, you can filter by UserID instead of the string “Me”.

  1: <Where>
  2:   <Eq>
  3:     <FieldRef Name="AssignedTo" />
  4:     <Value Type="Integer">
  5:       <UserID />
  6:     </Value>
  7:   </Eq>
  8: </Where>
  9: <OrderBy>
 10:   <FieldRef Name="Status" />
 11:   <FieldRef Name="Priority" />
 12: </OrderBy>

This is just a reminder for me, so I can find the information more quickly. But maybe this is useful for some of you as well 🙂
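For completeness, a sketch of how the CAML above could be used with an SPQuery in the server-side object model (the list name “Tasks” and the SPWeb variable “web” are just examples; this needs a SharePoint farm to actually run):

```csharp
// assumes a server-side SharePoint context providing an SPWeb instance "web"
SPQuery query = new SPQuery
{
	Query = "<Where><Eq><FieldRef Name=\"AssignedTo\" />" +
	        "<Value Type=\"Integer\"><UserID /></Value></Eq></Where>" +
	        "<OrderBy><FieldRef Name=\"Status\" /><FieldRef Name=\"Priority\" /></OrderBy>"
};

// returns only the tasks assigned to the current user, regardless of UI language
SPListItemCollection myTasks = web.Lists["Tasks"].GetItems(query);
```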

Do long running operations on SPListItem creation

Events on SPListItems like ItemAdding or ItemAdded are nothing new; many of you have already used them. Recently I had a requirement to create a new SPSite when an item in a list has been created. So an ItemReceiver was my choice.

But the customer wanted something special 🙂 During the creation process, which takes some seconds, the user should see a loading animation.


Here comes the problem. The ItemEventReceiver runs in the background and has no knowledge of the GUI process, at least while it runs asynchronously. A very good explanation can be found here: Using synchronous “after” events (e.g. ItemUpdated) in SharePoint 2010.

The short summary: If you use synchronous events, they get executed in the same thread and you have the HttpContext and SPContext!

You already tried this and didn’t have the context objects?



Here comes the trick…


Grab the objects in the constructor, store them and use later when you need them.

private SPContext _spContext;
private HttpContext _httpContext;

public EventReceiver()
{
	_spContext = SPContext.Current;
	_httpContext = HttpContext.Current;
}

public override void ItemAdded(SPItemEventProperties properties)
{
	try
	{
		if (_spContext == null)
		{
			// item has been created via code, e.g. from a timer job. no context and no need to redirect
			return;
		}

		string url = SPUrlUtility.CombineUrl(_spContext.Web.ServerRelativeUrl, "_layouts...") + properties.ListItemId;
		SPUtility.Redirect(url, SPRedirectFlags.Default, _httpContext);
	}
	catch (ThreadAbortException e)
	{
		// occurs if redirected
	}
}

Ok. We’ve successfully redirected to our creation page, which can be a modal dialog or a full-frame page. I’ve not been able to start an SPLongOperation with that Page; a NullReferenceException was thrown. So my solution was another Layouts page, which then starts the SPLongOperation. When it is done, the Layouts page is closed. By closing it, the modal dialog from the item creation process also vanishes.

I’ve chosen the ItemAdded and not the ItemAdding event, because ItemAdding did not like the redirect: the item did not get created.

The Layouts-Page uses SPLongOperation e.g. in CreateChildControls.

using (var longRunning = new SPLongOperation(Page))
{
	longRunning.Begin();

	// do your long running operation here

	longRunning.End(null, SPRedirectFlags.Default, Context, null,
		"window.frameElement.commonModalDialogClose(1, null);");
}

After the operation has been executed, the SPLongOperation is ended and a script passed as last parameter in the End() method is executed. You don’t need script tags here.

To register the ItemEventReceiver to execute synchronously, use the following code.

SPList list = ...

// attach the EventReceiver
var receiverDefinition = list.EventReceivers.Add();
receiverDefinition.Type = SPEventReceiverType.ItemAdded;
receiverDefinition.Assembly = "yourAssemblyFullName";
receiverDefinition.Class = "Your EventReceiver class (incl. namespace)";
receiverDefinition.Synchronization = SPEventReceiverSynchronization.Synchronous;
receiverDefinition.Update();


Basically, that’s it. A combination of known techniques to create a new solution.


You can show a work-in-progress dialog to a user when a new SPListItem is created. Here are the steps:

  1. Use a synchronous asynchronous event 🙂
  2. Redirect to a Layouts-Page
  3. Use the SPLongOperation class to do your work
  4. Have fun and happy customers

Now Available: Office Developer Tools for Visual Studio 2012

Finally! Now Available: Office Developer Tools for Visual Studio 2012

There are some points to mention where the final release of the tools differs from the previous preview releases:

  • A validation experience that helps you find and fix common errors before submitting your apps to the Office Store
  • A continuous integration workflow
  • Windows Azure cloud service projects for creating provider-hosted Apps
  • A dramatically improved Workflow designer

The download link:

Using .NET 4 with SharePoint 2013

A while ago, I wrote an article about performing operations in parallel with SharePoint 2010 (.NET 3.5). –> Execute code in multiple threads (even with SharePoint)

Since I am not the only guy with these kinds of “problems”, others are writing about SharePoint and .NET as well. Especially .NET 4.5 and SharePoint 2013.

Stephane Eyskens has posted a nice six-post series about .NET within SharePoint 2013.

If you stumble across more great resources, please leave a comment.

Caching objects with HttpRuntime

I won’t go into the arguments for using a caching mechanism or not. This post is simply an example for an easy way to cache data.

So if you want to store an object in the cache, you can do so very easily.

var localizedString = Caching.EnsureObject(resourceName,
                                           () => GetOperation(parameter));

As you can see, it really doesn’t matter what type of object the cache will store.

class Caching
{
   private static readonly TimeSpan Duration = new TimeSpan(1, 0, 0);

   /// <summary>
   /// Return the cached value, or add it to the cache and return it.
   /// </summary>
   /// <typeparam name="T"></typeparam>
   /// <param name="cacheKey"></param>
   /// <param name="getObject"></param>
   /// <returns></returns>
   internal static T EnsureObject<T>(string cacheKey, Func<T> getObject)
   {
      var value = HttpRuntime.Cache[cacheKey];
      if (value != null)
         return (T) value;

      value = getObject.Invoke();

      if (value != null)
      {
         var expiration = DateTime.UtcNow.Add(Duration);
         HttpRuntime.Cache.Insert(cacheKey, value, null, expiration, Cache.NoSlidingExpiration);
      }
      return (T) value;
   }
}
Adjust the duration, or pass it as a parameter. Additionally, you could pass another delegate for exception handling. This example is meant as a reminder to think about caching again…
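Such an overload could look like this. It is only a sketch; the parameter names (duration, onError) and the error-handling strategy are illustrations added here, not part of the class above:

```csharp
// Hypothetical EnsureObject overload for the Caching class above.
// "duration" and "onError" are illustrative parameters, not part of the original.
internal static T EnsureObject<T>(string cacheKey, Func<T> getObject,
                                  TimeSpan duration, Action<Exception> onError)
{
   var value = HttpRuntime.Cache[cacheKey];
   if (value != null)
      return (T) value;

   try
   {
      value = getObject.Invoke();
   }
   catch (Exception ex)
   {
      // let the caller decide how to handle failures of the factory delegate
      onError(ex);
      return default(T);
   }

   if (value != null)
      HttpRuntime.Cache.Insert(cacheKey, value, null,
         DateTime.UtcNow.Add(duration), Cache.NoSlidingExpiration);

   return (T) value;
}
```

The caller then controls both the lifetime and what happens when the delegate throws, instead of having those baked into the class.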

Execute code in multiple threads (even with SharePoint)

Since SharePoint 2010 uses .NET 3.5, you cannot use the fancy new functions from .NET 4 🙂

So if we need, e.g., multi-threaded execution of code, we’ll need to write it ourselves. But, as you can see, this really isn’t so hard. The basic idea behind executing code in parallel threads is that you have an IEnumerable<T> of some kind. This can be a List, or any other IEnumerable.

So let us, for example, take a list of Guids, which are the IDs of all SPWebs in a SiteCollection. Then we iterate over each web and write the item count of all its lists to the console.

class ParallelExecutionTest
{
   private static int _overallItemCount;
   private static readonly object Lock = new object();

   public static void AddItemCount(int itemCount)
   {
      lock (Lock)
      {
         // only let one thread write to the counter
         _overallItemCount += itemCount;
      }
   }

   public static void CountListitemsInAllWebs(string siteUrl)
   {
      using (var site = new SPSite(siteUrl))
      {
         // perform the method/action on every web in the sitecollection
         site.AllWebs.Select(w => w.ID).EachParallel(webId =>
         {
            CountListitems(site.ID, webId);
         }, Environment.ProcessorCount);
         Console.WriteLine("Overall Itemcount: " + _overallItemCount);
      }
   }

   private static void CountListitems(Guid siteId, Guid webId)
   {
      // use new instances for each web
      using (var site = new SPSite(siteId))
      using (var web = site.OpenWeb(webId))
      {
         var itemCount = web.Lists.Cast<SPList>().Sum(list => list.ItemCount);
         Console.WriteLine("Web {0} has {1} items in all lists.", web.Title, itemCount);
         AddItemCount(itemCount);
      }
   }
}

That doesn’t look too complicated, does it? The little extension method EachParallel is all it takes to run the code in multiple threads. You have to decide whether your code can run in parallel, and whether that makes sense!

Note: Remember that SharePoint will most likely not work if you access the same objects from multiple threads. To be safe, create new instances of SharePoint objects in each thread!

The sample above will create as many threads as your system has CPUs; on my notebook with an i7 and Hyper-Threading, that is 8 threads. And here comes the point to remember: think carefully about the pitfalls of running your code in parallel. Here are some drawbacks compared to sequential execution:

  • Overhead for creating new SharePoint objects (calls to the SQL server)
  • Additional load on the SQL server by querying more data simultaneously (think about a 4 processor server board with x cores and HyperThreading)
  • Possibly more load on the local SharePoint server by writing logfiles
  • Exception handling: with sequential code you can abort; multiple threads keep running

Enough for now. Let’s look at the extension method which makes all this possible.

public static class Extensions
{
   /// <summary>
   /// Enumerates through each item and starts the action in a new thread
   /// </summary>
   /// <typeparam name="T"></typeparam>
   /// <param name="enumerable"></param>
   /// <param name="action"></param>
   /// <param name="maxHandles">e.g. Environment.ProcessorCount</param>
   public static void EachParallel<T>(this IEnumerable<T> enumerable, Action<T> action, int maxHandles)
   {
      // enumerate the passed IEnumerable so it can't change during execution
      var itemArray = enumerable.ToArray();
      var count = itemArray.Length;

      if (count == 0) return;
      if (count == 1)
      {
         // if there's only one element, just execute
         action(itemArray[0]);
         return;
      }

      // maxHandles must not be greater than the count of actions, or nothing will be done
      if (maxHandles > count) maxHandles = count;
      var resetEvents = new ManualResetEvent[maxHandles];

      for (var offset = 0; offset <= count / maxHandles; offset++)
      {
         EachAction(action, maxHandles, itemArray, offset, resetEvents);
      }
   }

   private static void EachAction<T>(Action<T> action, int maxHandles, IEnumerable<T> itemArray, int offset, ManualResetEvent[] resetEvents)
   {
      int i = 0;
      foreach (var item in itemArray.Skip(offset * maxHandles).Take(maxHandles))
      {
         resetEvents[i] = new ManualResetEvent(false);

         ThreadPool.QueueUserWorkItem(data =>
         {
            var index = (int)((object[])data)[0];
            try
            {
               // Execute the method and pass in the enumerated item
               action((T)((object[])data)[1]);
            }
            catch (Exception)
            {
               // Exception handling
            }
            // Tell the calling thread that we're done
            resetEvents[index].Set();
         }, new object[] { i, item });
         i++;
      }

      // Wait for all threads of this batch to execute
      if (i > 0)
         WaitHandle.WaitAll(resetEvents.Take(i).ToArray());
   }
}

All items in the IEnumerable are iterated. If there is a free slot, the action is executed in a new thread. There is no guarantee that the code executes in the same order as the items in your IEnumerable. Here is an example with IDs in an array, and the execution order:

Order in list   Execution order
0               3
1               6
2               7
3               1
4               4
5               0
6               2
7               5

Summary: Depending on your code and its requirements, multiple threads can be a good way to improve the speed of your code. It can even be a life-saver (thx Christopher!) for very long running operations. Take your time to think about it before you implement the “little” change to make your code run in multiple threads!

One last word. I mentioned .NET 4 at the beginning. Here is a sample.

var ids = new List<int> { 0, 1, 2, 3, 4, 5 }; 
ids.AsParallel().ForAll(id => { Console.WriteLine("Id: " + id); });

Nice, ain’t it?
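If you also want to cap the number of worker threads in the .NET 4 variant, analogous to the maxHandles parameter above, PLINQ offers WithDegreeOfParallelism (a small self-contained sketch):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var ids = new List<int> { 0, 1, 2, 3, 4, 5 };

        // limit PLINQ to one worker per logical CPU; order of output is not guaranteed
        ids.AsParallel()
           .WithDegreeOfParallelism(Environment.ProcessorCount)
           .ForAll(id => Console.WriteLine("Id: " + id));
    }
}
```

As with EachParallel, the items are not processed in list order, so don’t rely on the output sequence.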

Major Update to the Fileserveraccess Web Part

In 2008 I released a Web Part which enables your users to access files on your fileservers through SharePoint. Original post. This Web Part has been downloaded many times. With this new version, I’ve tried to address the most frequently asked questions (like Kerberos), which will make the Web Part easier to use. Naturally, new features have been implemented to get you to upgrade to the new version.

With this release, the Web Part requires SharePoint Foundation / Server 2010. For the users who are still using WSS V3, please stick to the old version, or upgrade your farm 🙂

First some screenshots, so you know what I am talking about.




  • Download files from your fileservers via SharePoint
  • Download a folder with all containing files as zip-file
  • Upload files to a fileserver
  • Delete files from a fileserver
  • View the file properties
  • By default, the fileserver path has to be UNC. Local paths are not allowed, so a user cannot enter C:\ to access e.g. the web.config or other files on the local server
  • Multilanguage

Of course the access to the files is security trimmed. Meaning: if your users are not able to access files with their logon from their client, they won’t be able to from the Web Part!


For authorization against the fileserver, the credentials of the currently logged-on user are used. For SharePoint (and any other application as well), it is necessary to configure the WebApplication which is hosting the Web Part to use Kerberos instead of NTLM. Otherwise a server cannot pass the user credentials on to a second server. This is called the “double-hop problem”. To get a glimpse of the topic, take a look at an article I wrote some time ago: Configuring Kerberos for SharePoint. That post was written for SharePoint V3! But there are plenty of Kerberos guides out there for SP 2010. And a whitepaper from Microsoft: Configuring Kerberos Authentication for SharePoint 2010 Products

If you do not configure Kerberos for the WebApplication, the Web Part will detect that, and show a notification in the properties section.

A small sidenote: If you are going to use local paths (meaning a folder on your SharePoint server), you can continue using NTLM. 

Another good starting point for Kerberos-Troubleshooting can be found here:



To use the Web Part, you’ll need to configure at least a path. Files – and subfolders – from that path will be shown.

Additionally, there are some properties, which modify features of the Web Part.

The Paging size defines how many files are displayed on one page. With the next three checkboxes, you can allow files to be downloaded as zip, to be uploaded, and to be deleted.

I recommend leaving the caching activated. Deactivate it only if you have specific reasons, because it means more work for your SharePoint server and fileserver.

Using a local path as source

In case you want to use a local path as source for the Web Part, you have to allow the path to be used. To do so, follow the steps below.

  1. locate the feature.xml file ("C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\FEATURES\FileserverAccess\Feature.xml")
  2. Edit the file in your favorite editor
  3. Look for the property with the key “AllowLocalPaths” and modify the value to match the drive letter you wish to use
    Replace “Driveletter” with e.g. “C”. You can specify more than one drive letter. In that case, use “;” as divider
  4. Save the feature.xml
  5. Restart your IIS (iisreset)

Remember that you’ll need to modify the file on all of your FrontEnd SharePoint Servers! After an upgrade of the Web Part, the file has to be modified again. If you do not allow local drives, the Web Part will show an error.


This version is compatible with the old version, so you can simply upgrade the solution and benefit from the new features!

Download the new version for SharePoint 2010 (Foundation and Server): RH.FileserverAccess.wsp

Download the old version for WSS V3 / MOSS 2007: RH.FileserverAccess.wsp

Update March 2012

  • I did not get the Web Part working in my claims-based authentication test environment. Additionally, the Web Part properties will show the current user and the authentication method. If you see Negotiate, your environment is set up correctly (for classic authentication).


  • Another small update fixes a problem that prevented uploads from working in Chrome

Update: WarmupScript

A long time ago, I posted a program which will hit all your sites. With parameters you can specify to hit all sites within a sitecollection.


This program has been updated. You can now omit the start Url and specify “Farm” as parameter instead. This way, all sites in all sitecollections in all webapplications in all… 🙂 will be warmed up.

The warmup uses an HttpRequest to query all homepages. It will not hit every page in the pages libraries, but hitting each web is sufficient for most scenarios.
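The core of such a warmup is just one plain HTTP request per web. A minimal sketch of the idea (not the tool’s actual source; class and method names are made up for illustration):

```csharp
using System;
using System.Net;

class WarmupSketch
{
    // Request a web's homepage so the server compiles and caches the page.
    static void HitUrl(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        // authenticate as the account running the program
        request.Credentials = CredentialCache.DefaultCredentials;
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("{0} -> {1}", url, response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            Console.WriteLine("{0} -> {1}", url, ex.Message);
        }
    }

    static void Main(string[] args)
    {
        foreach (var url in args)
            HitUrl(url);
    }
}
```

The real program additionally enumerates the webs via the SharePoint object model; the request itself stays this simple.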

One thing to mention. If you want to warmup your Central Administration, you’ll have to call the program with the Url, as the CA will not be included in the webapplication enumeration of a SharePoint farm.


WarmupSharePoint http://your.server.url [AllSites] – hits one site only, unless the AllSites parameter is specified; then all sites will be dealt with.

WarmupSharePoint Farm – iterates through all sitecollections and hit all sites within

Download the program
Download the sourcecode

What to know about the feature folder

One of the first things I used to tell guys new to SharePoint development is: Never ever name the folder of your feature “Feature1”. If you create a solution with WSPBuilder, or did some time ago with VS 2008, you have to rename the folders immediately!


This is how a typical SharePoint project looks if you create features. I guess most of us have used the mighty WSPBuilder for developing with SharePoint.

After building the VS solution and creating a WSP package with WSPBuilder, the wsp contains two folders. They reflect the names we defined in VS.



Now lets take a look at the same features in a Visual Studio 2010 SharePoint Solution.


It almost looks the same as a WSPBuilder solution in VS 2008.

The features have been created by right-clicking on the Features folder in the Solution Explorer. This is important.

In many places VS uses tokens to replace strings with certain solution specific values like the assembly name. You can take a look at the tokens here: Replaceable Parameters

If we look at the wsp again, we notice the difference. Visual Studio 2010 has added the solution name as prefix to the feature folders. Great. Thank you, Microsoft. Now we can name our feature folders e.g. after the scope (Site, Web, WebApplication or Farm), and do not have to worry about duplicate names.

The magic of this can be seen, if we take a look at the properties of the feature folders.
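The prefixing comes from the feature’s Deployment Path property, which defaults to a tokenized value along these lines (check the property grid in your own project to confirm):

```
$SharePoint.Project.FileNameWithoutExtension$_$SharePoint.Feature.FileNameWithoutExtension$
```

At packaging time, Visual Studio replaces the tokens with the project and feature names, which is what produces the prefixed folder names in the wsp.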


Conclusion: VS 2010 is a great improvement for us SharePoint developers. We don’t have to know all the places where it helps, but it can’t hurt, either. I hope this article brings a little light to the magic 🙂