.NET goes OpenSource!

Now that is excellent news!

As part of its online conference connect(); on Wednesday, Microsoft announced that the complete .NET Framework will be released as open-source software under the MIT license.

Source: http://www.heise.de/newsticker/meldung/Microsoft-NET-wird-komplett-Open-Source-2452033.html

As a developer, this of course makes me very happy. Soon my Pis will be able to do even more, thanks to Mono 🙂

So far, the power meter is read out via its infrared interface over USB/RS232 😉

Why I prefer WebApplication deployments over GAC deployments

This article is written with SharePoint 2013 in scope. With SP2013 the default TrustLevel in the web.config is set to “FullTrust”. In previous versions the value was “WSS_Minimal”.

When you develop Farm-Solutions for SharePoint, you can deploy assemblies to the Global Assembly Cache (GAC) or configure the solution for a “bin-Deployment”.

The bin way puts the assemblies into the bin folder of the WebApplication the solution is deployed to.

You can switch the target of the assemblies by modifying the properties of a SharePoint project. The default value is GlobalAssemblyCache.

What does the changed property do to your solution?

Changing the value to WebApplication will deploy the assembly to the bin directory of your IIS directory, as mentioned earlier. Because of the narrowed scope of the assembly, only the associated application pool needs to be recycled.

The classes you implemented will be available only to your web application, which is fine in most cases. Assemblies within the GAC are available to all processes on the server.

In the past (prior to SharePoint 2013) assemblies did not have FullControl permissions if they were deployed to the bin directory. Instead, partial trust was granted through the “WSS_Minimal” policy, and a custom Code Access Security (CAS) policy had to be configured.

An advantage was the least-privilege approach; a disadvantage was the overhead of creating this CAS policy.

Here is a list of advantages and disadvantages of bin deployments. Pick your items and weigh them the way you prefer.

My conclusion was to deploy to bin whenever possible. The faster development and deployment, together with the minimal impact on production servers, was worth it.

Advantages

  • Faster Deployment – With a GAC Deployment all application pools (incl. CentralAdministration) will be recycled
  • Less impact on other components

Disadvantages

  • TimerJobs need assemblies within the GAC
  • Using Feature Receivers, you’ll need to recycle the CentralAdministration application pool as well. Otherwise activating a Feature through the UI could load the old assembly
  • Only one version of an assembly

What do you think? Why do you deploy to bin or to GAC?

Custom field and UpdateFieldValueInItem()

Recently I was developing a custom field. To store modified values, the UpdateFieldValueInItem method has to be overridden.

In the normal flow of clicking the submit/save button, the method is called and I can adjust the value of the field within the current item. The changes are submitted to the database later.

But what if you want to modify items other than the current one? Sure, you can do that, you might think. But you’ll need to consider the scenario where the user does not click submit/save: the method is called on every postback. The PeoplePicker, for example, causes a postback when it validates its values, and other controls might behave this way as well.

My problem was that I could not modify items other than the current one without checking whether submit/save was clicked. I ended up checking the form for the control that triggered the postback. If that value contains “SaveItem”, I am good.

/// <summary>
/// Updates the underlying value of the field to the latest user-set value. 
/// </summary>
public override void UpdateFieldValueInItem()
{
	// do not trigger the logic if a postback without submit occurs, e.g. caused by the PeoplePicker
	string eventTarget = Context.Request.Form["__EVENTTARGET"];
	if (eventTarget != null && eventTarget.Contains("SaveItem"))
	{
		// the item is actually being saved, so it is safe to update other items here
		base.UpdateFieldValueInItem();
	}
}

So if you need to know if the item is currently being saved or if you are within a regular postback, look at your form 🙂

Creating a lookup field via elements.xml

This is another post to help me remember. And as a reference for all of you who cannot remember how to create an SPFieldLookup via XML.

<Field ID="{8b26ec41-b6c3-4327-0066-0c18c0768626}" Name="InternalName" StaticName="InternalName" DisplayName="Display Name" Type="Lookup" ShowField="Title" Mult="TRUE" List="Lists/LookupList" Overwrite="TRUE" />

When you provision an SPField via features, do not forget to add Overwrite="TRUE"! Otherwise you’ll get an exception like this:

<nativehr>0x8107058a</nativehr><nativestack></nativestack>Failed to bind the content type ‘0x010200C7A18EB120BB4A00892E9E1EE9481C9B0067E475B6FDD54048B347370871443CAD’ to the list ‘/sites/rhtest/Lists/LookupList’ on ‘http://rhdevsp2013/sites/rhtest’. Exception ‘<nativehr>0x80070057</nativehr><nativestack></nativestack>’.

Unfortunately the MSDN is not very specific about the Overwrite property:

Optional Boolean. Specifies whether the field definition for a new field that is activated on a site (SPWeb) overwrites the field definition for an existing field, in cases where the new field has the same field ID as the existing field. True if the new field overwrites the existing field with the same field ID; otherwise false. The default is false.

Note, however, that if the existing field is read-only, or if it is sealed, then it will not be overwritten by the field that is being activated, even if this attribute is set to true.
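
By the way, if you prefer code over XML (or run into the read-only/sealed case), the same field can be provisioned programmatically. This is only a minimal sketch, assuming a feature receiver with an SPWeb at hand; the names are taken from the XML example above:

// Minimal sketch: provision the lookup field from code, e.g. in a feature receiver.
// "web" is assumed to be the SPWeb the feature is activated on.
string fieldXml =
	"<Field ID=\"{8b26ec41-b6c3-4327-0066-0c18c0768626}\" Name=\"InternalName\" StaticName=\"InternalName\" " +
	"DisplayName=\"Display Name\" Type=\"Lookup\" ShowField=\"Title\" Mult=\"TRUE\" " +
	"List=\"Lists/LookupList\" Overwrite=\"TRUE\" />";

// AddFieldInternalNameHint keeps the internal name from the XML instead of generating a new one
web.Fields.AddFieldAsXml(fieldXml, false, SPAddFieldOptions.AddFieldInternalNameHint);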

Hopefully I’ll think about this the next time a lookup field needs to be provisioned…

What is Dependency Injection?

This post will help you understand what DI (Dependency Injection) is, and how easily you can adopt this design pattern to create flexible and testable code. For a definition take a look at this Wikipedia article.

Of the three types of Dependency Injection

  • constructor injection
  • setter injection
  • interface injection

this post covers the interface type.

The example will deal with a SharePoint list. This list can either be the posts list of a blog, or the comments list. The distinction is necessary, as both lists have different fields, which might be used for querying data later.

class Program
{
	static void Main(string[] args)
	{
		var list = new SharePointList {SharePointListType = SharePointList.ListType.Posts};
		Console.WriteLine("Posts:\t\t"+list.GetListTitle());

		list = new SharePointList{SharePointListType = SharePointList.ListType.Comments};
		Console.WriteLine("Comments:\t"+list.GetListTitle());
	}
}

class SharePointList
{
	internal enum ListType
	{
		Posts,
		Comments
	}

	private string _listTitle;

	public ListType SharePointListType { get; set; }

	public string GetListTitle()
	{
		switch (SharePointListType)
		{
			case ListType.Posts:
				_listTitle = GetListTitleFromSomewhere("Posts");
				break;
			case ListType.Comments:
				_listTitle = GetListTitleFromSomewhere("Comments");
				break;
			default:
				throw new ArgumentOutOfRangeException();
		}
		return _listTitle;
	}

	private string GetListTitleFromSomewhere(string listName)
	{
		// dummy
		return Guid.NewGuid().ToString();
	}
}

It produces two lines on the console, with different GUIDs as list titles.

What is the problem with this code? Why are we talking about Dependency Injection?

Look at the code above again, and consider this new requirement to your code:

I need the categories list as well…

To fulfill this new requirement, you’ll need to extend the enum and the code within the switch statement. Wouldn’t it be better to just change the Main method instead?

This is an example of the solution with an interface and implementations for each list type:

internal class Program
{
	private static void Main(string[] args)
	{
		var lists = new List<ISharePointList>
		{
			new PostsList(),
			new CommentsList(),
			new CategoriesList()
		};

		foreach (ISharePointList list in lists)
		{
			list.ConsoleWriteLine();
		}
	}
}

internal interface ISharePointList
{
	string GetListTitle { get; }
	void ConsoleWriteLine();
}

internal class PostsList : ISharePointList
{
	public string GetListTitle { get { return "Posts"; } }

	public void ConsoleWriteLine()
	{
		Console.WriteLine("Posts: " + GetListTitle);
	}
}

internal class CommentsList : ISharePointList
{
	public string GetListTitle { get { return "Comments"; } }

	public void ConsoleWriteLine()
	{
		Console.WriteLine("Comments: " + GetListTitle);
	}
}

internal class CategoriesList : ISharePointList
{
	public string GetListTitle { get { return "Categories"; } }

	public void ConsoleWriteLine()
	{
		Console.WriteLine("Categories: " + GetListTitle);
	}
}

As you can see, the Main method now uses three lists. You could even use reflection to get all classes that implement the interface “ISharePointList”, to make your code even more dynamic.

var lists = new List<ISharePointList>();
var currentAssembly = typeof(Program).Assembly;
// find all types in the current assembly that implement ISharePointList
var types = currentAssembly.DefinedTypes.Where(type => type.ImplementedInterfaces.Any(i => i == typeof(ISharePointList)));
foreach (TypeInfo typeInfo in types)
{
	lists.Add((ISharePointList)Activator.CreateInstance(typeInfo.AsType()));
}

Of course each list needs its own implementation. But you would need to implement different logic anyway.

Dependency Injection means passing objects around without caring what exactly they are; they are resolved to their actual type later, after they have been injected. In this example the “foreach (ISharePointList list in lists) {}” loop resolves the actual type.

By implementing the interface you can pass in completely different objects if you need them, e.g. for testing purposes. Testing SharePoint is not easy, and it might help to pass in a dummy which returns something, to verify that the basic logic is working.
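
A test double, for example, only has to implement the interface. A minimal sketch (FakeList is a hypothetical class, not part of the example above):

// Hypothetical test double: returns a fixed title instead of talking to SharePoint.
internal class FakeList : ISharePointList
{
	public string GetListTitle { get { return "Fake"; } }

	public void ConsoleWriteLine()
	{
		Console.WriteLine("Fake: " + GetListTitle);
	}
}

Any code that accepts an ISharePointList can now be exercised without a SharePoint farm behind it.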

Migrate SharePoint Blog to WordPress

As promised here, this is a follow-up post with the tool I developed for the SharePoint to WordPress migration.

First, a screenshot:

Migrate SharePoint to WordPress screenshot

What is it that we have to cover with a migration? Copying the posts is not enough. So I came up with these features:

Features

  • Copy posts
  • Copy comments
  • Copy resources like images and downloads
  • Create needed tags and categories
  • Modify links to local resources
  • Deal with HTTPS, if links on the source blog are absolute and mixed with HTTP
  • Using web services to connect to source and destination
  • URL rewriting (covered by a WordPress Plugin)
  • Delete all content from the destination blog (for migration testing)
  • Replace strings (with Regex)
  • A nice (WPF) GUI

Description

Originally I built a plain console application. Then I thought that a console application would possibly scare some users. And after some time I wanted to do some WPF again. So I created a WPF application to wrap all the functionality into a GUI. This way it is easier to use for the folks out there who do not like black console applications 😉 Since I am using web services to connect to both blogging platforms, the tool can be executed on any client computer. No access to a server session is required.

To start, you obviously need the URLs of the source and destination blog, as well as credentials for the destination blog. Since most blogs allow anonymous access, you’ll probably not need to fill in the source credentials. The migration starts by hitting the “Migrate Content” button. That should be it for using the tool. It will remember the last entries for the URLs and login names, in case you need to perform multiple runs, which was the case for me. The passwords need to be re-entered for security reasons.

It’ll show the progress of all steps in a progress bar and text at the bottom of the application, and tell you when it’s finished. Existing categories are mapped to new categories and used as tags, too. I’ve tested the tool with three blogs, one being my own with CKS:EBE installed. There really isn’t much more to configure to have your blog migrated to WordPress with this tool.

Some data needs to be modified before the blog can go live on the new destination. In the case of URLs this is necessary to generate valid links within the destination. Fortunately there is a plugin available to do some fancy rewriting. Since WordPress shows its own smilies, I wanted to get rid of some strings within the posts that reference smilies as images, and replace them with, well, smilies. A text file named “replacestrings.txt” within the same directory takes one replacement per line, with the regex pattern and the replacement string separated by “;#”:

<img.[^>]*/wlEmoticon-smile_2.png"(>| >|/>| />|</img>)*;#:-)
<img.[^>]*/wlEmoticon-sadsmile_2.png"(>| >|/>| />|</img>)*;#:-)
<img.[^>]*/wlEmoticon-winkingsmile_2.png"(>| >|/>| />|</img>)*;#:-)
http://www.hezser.de/_layouts/images/download.gif;# 

The sample will replace all my old smilie images with plain strings before the posts are created on the destination blog. The images that were used as smilies in the source won’t be copied to the destination, because they are no longer referenced. Otherwise I would have ended up with many images of smilies. I like smilies 😀
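
In case you want to reproduce the mechanism: a minimal sketch of how such a replacement file could be applied, assuming one “pattern;#replacement” pair per line as in the sample above (ApplyReplacements is a hypothetical helper, not the tool’s actual code):

// Minimal sketch; requires System.IO and System.Text.RegularExpressions.
private static string ApplyReplacements(string postBody)
{
	foreach (string line in File.ReadAllLines("replacestrings.txt"))
	{
		if (string.IsNullOrEmpty(line)) continue;
		// left of ";#" is a regex pattern, right of it the (possibly empty) replacement
		string[] parts = line.Split(new[] { ";#" }, StringSplitOptions.None);
		postBody = Regex.Replace(postBody, parts[0], parts.Length > 1 ? parts[1] : string.Empty);
	}
	return postBody;
}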

You can stop reading here if you are a user who just wants to download the tool and migrate a blog. As a developer you might be interested in how the tool works…

Technical stuff

The tool gives me a good opportunity to explain some of the programming tasks I used for the migration. I will walk through a few of them.

SharePoint offers web services (_vti_bin/lists.asmx), WordPress an XML-RPC interface (I used CookComputing.XmlRpc to connect). Those two are used to connect to the blogs. Since the SharePoint web services need display names to address the posts and comments lists, I first queried them by list template.

Querying SharePoint for List Titles

Use the SharePoint lists web service to get all lists of a site and search for specific lists like posts and comments. The lists are identified by the template they use, so there is no localization issue.

// _lists is the generated proxy for the lists.asmx web service
_lists = new Lists
{
	Url = string.Format("{0}/_vti_bin/lists.asmx", BlogUrl),
	Credentials = CredentialCache.DefaultNetworkCredentials
};
XDocument response = XDocument.Parse(_lists.GetListCollection().OuterXml);
// _s holds the XML namespace of the <List> elements in the response
IEnumerable<XElement> lists = response.Root.Descendants(XName.Get("List", _s.ToString()));
foreach (XElement list in lists)
{
	XAttribute listTemplate = list.Attribute(XName.Get("ServerTemplate"));
	if (listTemplate != null && listTemplate.Value == "301")
	{
		// template 301 = Posts list
		PostListName = list.Attribute(XName.Get("Title")).Value;
		PostListServerRelativeUrl = list.Attribute(XName.Get("DefaultViewUrl")).Value.Replace("/AllPosts.aspx", string.Empty);
	}
	else if (listTemplate != null && listTemplate.Value == "302")
	{
		// template 302 = Comments list
		CommentListName = list.Attribute(XName.Get("Title")).Value;
	}
}

With the list names retrieved, I can query the lists for data. The web services use display names to identify lists.

Get SharePoint items with paging via web service

XDocument response = GetListItems(postsConfig);
do
{
	XElement root = response.Root;
	foreach (XElement row in root.Descendants(XName.Get("row", _z.ToString())))
	{
		// parse data here
	}
	// ListItemCollectionPositionNext is the paging cursor returned by the web service
	XElement node = root.Descendants(XName.Get("data", _rs.ToString())).First();
	XAttribute nextNode = node.Attribute("ListItemCollectionPositionNext");
	if (nextNode != null)
	{
		postsConfig.ListItemCollectionPosition = nextNode.Value;
		if (!string.IsNullOrEmpty(postsConfig.ListItemCollectionPosition))
		{
			postsConfig.PageSize = node.Attribute("ItemCount").Value;
			response = GetListItems(postsConfig);
		}
	}
	else
	{
		// no more pages: reset the state to leave the loop
		postsConfig.PageSize = null;
		postsConfig.ListItemCollectionPosition = null;
	}
} while (!string.IsNullOrEmpty(postsConfig.PageSize));

The method that actually queries the web service for list items is shown below. The properties of the SharePointListConfig class for the list title, ListItemCollectionPosition and PageSize are simple string properties. The view fields are specified to fetch only the data we need for the migration.

private XDocument GetListItems(SharePointListConfig config)
{
	var xmlDoc = new XmlDocument();

	XmlNode ndQuery = xmlDoc.CreateNode(XmlNodeType.Element, "Query", "");
	XmlNode ndViewFields = xmlDoc.CreateNode(XmlNodeType.Element, "ViewFields", "");
	XmlNode ndQueryOptions = xmlDoc.CreateNode(XmlNodeType.Element, "QueryOptions", "");

	if (!string.IsNullOrEmpty(config.ListItemCollectionPosition))
	{
		ndQueryOptions.InnerXml = string.Format("<IncludeMandatoryColumns>FALSE</IncludeMandatoryColumns><DateInUtc>TRUE</DateInUtc><Paging ListItemCollectionPositionNext=\"{0}\" />",
			config.ListItemCollectionPosition.Replace("&", "&amp;"));
	}
	else
	{
		ndQueryOptions.InnerXml = "<IncludeMandatoryColumns>FALSE</IncludeMandatoryColumns><DateInUtc>TRUE</DateInUtc>";
	}

	// get all comments and posts
	if (config.ListItemType == SharePointListConfig.ListType.Posts)
	{
		// PostCategory for SP Blogs, BlogTitleForUrl and Categories for EBE Blogs
		ndViewFields.InnerXml = "<FieldRef Name='ID' /><FieldRef Name='Title'/><FieldRef Name='Body'/><FieldRef Name='PublishedDate'/><FieldRef Name='BlogTitleForUrl'/><FieldRef Name='Categories'/><FieldRef Name='PostCategory'/><FieldRef Name='Author'/>";
	}
	else
	{
		ndViewFields.InnerXml = "<FieldRef Name='ID' /><FieldRef Name='Title'/><FieldRef Name='Body'/><FieldRef Name='PostTitle'/><FieldRef Name='CommentUrl'/><FieldRef Name='EmailAddress'/><FieldRef Name='Author'/><FieldRef Name='Created'/>";
	}
	try
	{
		XmlNode ndListItems = _lists.GetListItems(config.GetListName(), null, ndQuery, ndViewFields, null, ndQueryOptions, null);
		XDocument response = XDocument.Parse(ndListItems.OuterXml);
		return response;
	}
	catch (System.Web.Services.Protocols.SoapException ex)
	{
		throw new Exception(ex.Message + Environment.NewLine + ex.Detail.InnerText, ex);
	}
}

After all data has been read, local resources parsed and links replaced, we move on to the destination side.

WordPress specific details

As stated above, I’ve used an existing library. There are plenty of samples out there if you look for them. I’ve implemented the following methods:

public interface IWordpressXmlRpc
{
	[XmlRpcMethod("metaWeblog.newMediaObject")]
	WordPressFile newImage(string blogid, string username, string password, WordPressFile theImage, bool overwrite);

	[XmlRpcMethod("wp.getMediaLibrary")]
	MediaItem[] getMediaLibrary(string blogid, string username, string password, MediaFilter filter);

	[XmlRpcMethod("wp.deletePage")]
	bool deletePage(string blogid, string username, string password, int page_id);

	[XmlRpcMethod("metaWeblog.getRecentPosts")]
	ExistingPostContent[] getRecentPosts(string blogID, string username, string password, int numberOfPosts);

	[XmlRpcMethod("metaWeblog.newPost")]
	string newPost(string blogid, string username, string password, NewPostContent content, bool publish);

	[XmlRpcMethod("metaWeblog.editPost")]
	bool editPost(string blogid, string username, string password, NewPostContent content, bool publish);

	[XmlRpcMethod("wp.deletePost")]
	bool deletePost(string blogid, string username, string password, int postid);

	[XmlRpcMethod("wp.newComment")]
	int newComment(string blogid, string username, string password, int post_id, Comment comment);

	[XmlRpcMethod("wp.getComments")]
	Comment[] getComments(string blogid, string username, string password, CommentFilter filter);

	[XmlRpcMethod("wp.editComment")]
	bool editComment(string blogid, string username, string password, int comment_id, Comment comment);

	[XmlRpcMethod("wp.deleteComment")]
	bool deleteComment(string blogid, string username, string password, int comment_id);

	[XmlRpcMethod("wp.newTerm")]
	string newTerm(string blogid, string username, string password, TaxonomyContent content);

	[XmlRpcMethod("wp.getTerms")]
	Term[] getTerms(string blogid, string username, string password, string taxonomy, TermFilter filter);

	[XmlRpcMethod("wp.deleteTerm")]
	bool deleteTerm(string blogid, string username, string password, string taxonomy, int term_id);
}

I would like to share some issues I ran into, so you don’t hit the same problems when programming against the WordPress XML-RPC interface.

Post deletion

Just call the wp.deletePost method? Almost. You’ll have to call it twice: the first call moves the post to the recycle bin, the second deletes it permanently.
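
A sketch of what that looks like with the interface above, assuming a proxy generated by CookComputing.XmlRpc (URL and credentials are placeholders):

// Minimal sketch: the first call moves the post to the recycle bin,
// the second call deletes it permanently.
IWordpressXmlRpc proxy = XmlRpcProxyGen.Create<IWordpressXmlRpc>();
((IXmlRpcProxy)proxy).Url = "http://myblog.example/xmlrpc.php"; // placeholder URL

proxy.deletePost("1", "admin", "password", postId); // -> recycle bin
proxy.deletePost("1", "admin", "password", postId); // -> gone for good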

Media deletion

There is no method to delete items from the media gallery 🙁 Fortunately items within the gallery behave like pages. So if you implement and call the deprecated wp.deletePage method, you can achieve what you want (remember to delete twice).

Categories and Tags

Both can be managed with the terms interface; the string passed as the “taxonomy” parameter decides which one is affected. It can be “category” or “post_tag”.

Other than that, the WordPress API is pretty straight-forward and easy to use.

Download

The download contains an executable, which is the tool itself, and a folder with the complete source code.

Migrate SharePoint To WordPress

ChangePassword Webpart – new version available

The ChangePassword WebPart on CodePlex has been downloaded over 20,000 times. The new version has a couple of new features:

  • Easy Installation
  • SharePoint 2010 and 2013 (Foundation and Server)
  • Password strength indicator
  • Plugin support to extend functionality by custom code
  • Warning if an unsecured connection is used
  • Copyright hint can be removed
  • Auditing of password changes (and attempts)
  • Logging into the SharePoint logs

This is how it might look on your SharePoint:
ITaCS Password Changer
Documentation and downloads are available here.

A new home for this blog

After many years of SharePoint as blogging platform, I decided to move to WordPress. There are several reasons for the decision.

One would be that I want to get rid of my server at home.

Another is SharePoint and its blogging capabilities. As you probably know, I worked on the CKS:EBE (Community Kit for SharePoint – Enhanced Blog Edition) blogging extension for SharePoint blogs some years ago. It is awesome to see that the default blog can be extended to such an extent. I even made it compatible with SharePoint 2013. But WordPress offers far more functionality, with so many plugins and themes available.

And I wanted to try something different 😉

The migration process needed to cover all posts, comments, attachments/linked files and links. There was no tool that matched these requirements, so I developed my own. I will post further articles and the source code later. Here is a small teaser of the WPF GUI I put on top of the former console application.

Migrate SharePoint Blog to WordPress

Since the URL scheme changed from https://www.hezser.de/blog/archive/2014/09/05/tfs-migration-from-on-premise-to-visual-studio-online.aspx to https://www.hezser.de/blog/2014/09/05/tfs-migration-from-on-premise-to-visual-studio-online, I had to think about redirection. Fortunately I am not the first person with this problem. The WordPress plugin “Redirection” by John Godley does all that for me. The regex “/blog/archive/(\d*)/(\d*)/(\d*)/(.*).aspx” matches old URLs and redirects to “/blog/$1/$2/$3/$4”.
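
If you want to verify such a rule before relying on it, the mapping is easy to reproduce in code; a quick sketch:

// Quick check of the rewrite rule (requires System.Text.RegularExpressions)
string oldPath = "/blog/archive/2014/09/05/tfs-migration-from-on-premise-to-visual-studio-online.aspx";
string newPath = Regex.Replace(oldPath, @"/blog/archive/(\d*)/(\d*)/(\d*)/(.*).aspx", "/blog/$1/$2/$3/$4");
// newPath: "/blog/2014/09/05/tfs-migration-from-on-premise-to-visual-studio-online"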

A couple of other plugins provide similar functionality.

So bye bye SharePoint for blogging and welcome WordPress. Btw: what do you think about the theme? It’s red now 🙂

TFS Migration from On-Premise to Visual Studio Online

Having a server at home that hosts a TFS is nice. But is it really necessary? Not really. So I decided to move all my sources to Visual Studio Online.

Visual Studio Online Basic is free for the first 5 users. That’s enough for me.

The migration process is straightforward with a tool that is installed on my computer. The process is described here. Another post can be found here.


During the pre-migration steps a user mapping is performed, connections are verified and a few other things happen. The process itself takes some time. I had 32 MB in 4680 files, and it took about half an hour. Another service on my server that isn’t needed anymore 🙂

Conclusion: Migration from On-Premise TFS (in my case 2012) to Visual Studio Online is very easy.

Using TLS with SmtpClient

A rather small change to your code can increase security by sending e-mails over an encrypted connection.

Recently I stumbled across code that sent e-mails with the System.Net.Mail.SmtpClient class. That piece of code did not attempt encrypted communication with the receiving SMTP server. So I changed it to enable a TLS connection.

var message = new MailMessage(); // declared outside the try block, so the catch block can reuse it
try
{
  _smtpClient.EnableSsl = true;
  _smtpClient.Send(message);
}
catch (SmtpException)
{
  // if the recipient mail server does not support TLS, send without encryption
  _smtpClient.EnableSsl = false;
  _smtpClient.Send(message);
}

The change in my code was to enable TLS by default and turn it off in case the receiving SMTP server does not support it. Everything else is untouched, so a small change in code increases security.

I am catching the SmtpException that is thrown if TLS is unsupported. You can read more about the property and what it changes here: http://msdn.microsoft.com/en-us/library/system.net.mail.smtpclient.enablessl%28v=vs.110%29.aspx

Most of the time “old” code is worth a review with current knowledge. But I am sure you know that already 🙂

Update:

TLS with SharePoint

Useful JavaScript to know when working with SharePoint Display Templates

This post has some really great examples of JavaScript helper methods and available properties for working with Display Templates in SharePoint 2013.

http://dotnetmafia.com/blogs/dotnettipoftheday/archive/2014/02/26/useful-javascript-for-working-with-sharepoint-display-templates-spc3000-spc14.aspx

If you ever have to decide whether your script is running on SharePoint Foundation, use this one:

// Srch.U.isSPFSKU() returns true when running on SharePoint Foundation
if (!Srch.U.isSPFSKU()) {
    // code in here only runs on SharePoint Server
}

SharePoint App Deployment fails

Visual Studio does not tell you much if an app deployment fails.


Fortunately SharePoint logs more information about the problem that occurred during the app deployment in the ULS log.

So if you run into the “There were deployment errors.” exception, take a look at the ULS log. In this particular case SharePoint didn’t like my JavaScript:

App Packaging: CreatePackage: Unexpected exception: There were errors when validating the App package: There were errors when validating the App Package. Other warnings / errors associated with this exception:  Custom action urls must start with "http:", "https:", "~appWebUrl" or "~remoteAppUrl".  The url "javascript:DoSomething();" is not in the right format.

Always remember to look at the log files if you get exceptions. They often contain more information 🙂

The solution for warming up SharePoint

Most SharePoint farms have some solution in place for the long loading times after an application pool recycle or an iisreset. There are many different ways to preload websites so your users get faster load times. So why another solution?

There are some questions that I think have not been dealt with before:

  • Most solutions require some sort of Timer to be started (e.g. a Scheduled Task)
  • When should the warmup occur?
  • What about multiple WebFrontend Servers?
  • How about Claims support?
  • Which URLs have to be called? What about extended WebApplications?
  • New WebApplications require the warmup tool to be adjusted
  • Manual warmup after deployments
  • What about search?
  • Did the Farm warmup correctly?

Years ago I developed a console application which warms up SharePoint by calling each site within a site collection. It has been updated multiple times with new features.
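
The core idea is plain HTTP: request every web of a site collection so the pages get compiled and cached. A minimal sketch of that idea, assuming Windows authentication and a placeholder URL (the real tool does quite a bit more, e.g. Claims handling):

// Minimal warmup sketch; requires Microsoft.SharePoint and System.Net.
using (var site = new SPSite("http://sharepoint/sites/intranet")) // placeholder URL
{
	foreach (SPWeb web in site.AllWebs)
	{
		var request = (HttpWebRequest)WebRequest.Create(web.Url);
		request.UseDefaultCredentials = true; // plain Windows authentication
		using (request.GetResponse()) { } // loading the page is all we need
		web.Dispose();
	}
}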

The basis of the new solution is still the “old” application. It has been integrated into the SharePoint Central Administration and the SharePoint Timer job. That way it can be configured through an application page and is executed by the SharePoint Timer on each web frontend server. The solution has been tested with SharePoint 2010 and 2013.

A Custom Action displays a new link within the “Monitoring” Section of the Central Administration.


All WebApplications are listed and can be configured separately. The time the application pools recycle is read from IIS and set as the default time (+ 1 minute). That way you can ensure fast pages even shortly after the daily recycle.


A manual warmup can be started through the Timerjob page, or by downloading and executing a batch file (which has to be executed on each farm server).

If you select to write to the EventLog, each execution of a job will write a summary to the Application log. If all websites could be loaded without a problem, the Event ID will be 0, otherwise 1.
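
Writing such a summary is a one-liner; a sketch of what the job might do (the source name and variables are hypothetical):

// Hypothetical sketch: Event ID 0 = all websites loaded fine, 1 = at least one failure.
// Requires System.Diagnostics.
EventLog.WriteEntry("SharePoint Warmup", summaryText,
	allSitesLoaded ? EventLogEntryType.Information : EventLogEntryType.Warning,
	allSitesLoaded ? 0 : 1);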


The tool supports Claims WebApplications with Windows Authentication.

The download package contains two WSPs. One for a SharePoint 2010 farm, and the other for 2013.

Download: Download WSP Packages, Sources

Update November 11, 2014

  • Please restart the SharePoint Timer service on all farm servers after installing the solution

Building a Visual Studio project for SP2010/SP2013 (.NET 3.5/4.0)

In this post I will show you how you can use MSBuild to target your project for .NET 3.5 or .NET 4.0 and use a separate app.config file for each.

My Warmup Tool is supposed to work with SP2010 and SP2013. To achieve that compatibility, I have to change the TargetFramework of the project to be able to compile, as well as the app.config, so the application uses the desired framework. I didn’t want to change the values manually every time; an automated solution had to be possible. And it is: a few small changes to the project file, and MSBuild will do all the work for you 🙂

So let’s look into the default .csproj file, which sets the TargetFramework and references the app.config.

  1: <?xml version="1.0" encoding="utf-8"?>
  2: <Project DefaultTargets="Build" xmlns="http://..." ToolsVersion="4.0">
  3:   <PropertyGroup>
  4:     <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
  5:     <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
  6:     ...
  7:     <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
  8:   ...
  9:   </PropertyGroup>

The version has to be exchanged because SharePoint 2013 uses the .NET runtime 4, while SP2010 uses version 2. Trying to build the project with the wrong target framework will fail, because the referenced SharePoint assemblies depend on the correct framework version.

An easy way to specify the value depending on a condition is to use the constants you can define on the build page within the project settings.


I use “SP2010” and “SP2013”, which can be evaluated by MSBuild. You can change the value at any time. Reloading the project is not necessary, as the build process picks up the value when it needs it.

Let’s get back to the TargetFramework. Switching the version depending on a defined constant (“SP2013” in my case) is done with two new property groups in the .csproj file of your project. I’ve included the lines below the debug/release property groups, because the DefineConstants property is defined there.

<PropertyGroup Condition=" $(DefineConstants.Contains('SP2010')) ">
  <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
</PropertyGroup>
<PropertyGroup Condition=" $(DefineConstants.Contains('SP2013')) ">
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
</PropertyGroup>

Remove the default entry, the <TargetFrameworkVersion> element you see on line 7 of the first code fragment above.

Now we are set up and can compile the project for .NET 3.5 and .NET 4.0. Great. To have the application config include the matching supportedRuntime version as well, I’ve included two config files in my project.


The files are identical except for the value of the supportedRuntime, which is v2.0.50727 for .NET 3.5 and v4.0.30319 for .NET 4.0. Again MSBuild is your friend for using one or the other file, depending on the previously used constant “SP2010” or “SP2013”.

The switching condition can be specified like this:

<ItemGroup Condition="$(DefineConstants.Contains('SP2010'))">
  <None Include="Config\2010\app.config" />
</ItemGroup>
<ItemGroup Condition="$(DefineConstants.Contains('SP2013'))">
  <None Include="Config\2013\app.config" />
</ItemGroup>

The default entry for including the root-level app.config file has been removed from the csproj file.

As a result of this effort, I can build my console application for SharePoint 2010 and SharePoint 2013 just by switching the constant in the project settings. The corresponding app.config file is used as well.

Speed up SharePoint Update Installation

Installing updates for SharePoint 2013 takes a long time if you don’t disable some services prior to starting the update process by executing the hotfix exe file. To simplify the installation and speed it up, you can use a PowerShell script to stop the necessary services and start the update.

http://blogs.msdn.com/b/russmax/archive/2013/04/01/why-sharepoint-2013-cumulative-update-takes-5-hours-to-install.aspx

You need to copy the code and save it as e.g. Install_SharePoint_Update.ps1 in the same folder as the exe file. Start the script from the “SharePoint Management Shell”. It will take care of the services for you.

Btw: you can start the script with PowerShell from a remote path. No need to copy the update file to your SharePoint servers 🙂

SQL Access to Configuration DB required

In many cases you pass a URL string to connect to SharePoint. In my case I wanted to verify the URL by using this code:

Uri requestUri;
if (!Uri.TryCreate(absoluteUrl, UriKind.Absolute, out requestUri))
  throw new ArgumentException(absoluteUrl + " is not a valid URL.");

SPWebApplication webApplication = SPWebApplication.Lookup(requestUri);

And here comes the “but”: I did not know that the account executing the code needs permissions on the Configuration Database!

So either grant the permissions, or use something like this:

using (var site = new SPSite(requestUri.AbsoluteUri))
{
  SPWebApplication webApplication = site.WebApplication;
}

Happy SharePointing…

When a Feature gets installed

Have you ever thought about the Features folder and when a folder will be created for one of your features? Well, I did 🙂

Why is this relevant anyway? To be able to activate a feature on a given scope, it has to be installed first. That’s why.

  • stsadm -o addsolution – The solution is added to the farm. Features are not available.
  • stsadm -o deploysolution – Feature folders are created and the Features are available for activation.
  • stsadm -o installfeature – “A feature with ID xyz has already been installed in this farm. Use the force attribute to explicitly re-install the feature.”

Great. After deploying the solution, the feature is automatically installed and can be used. I expected this, because installing a feature is a rather uncommon task.

Here comes another one: what if you add a feature to an existing (and deployed) solution and perform an upgrade?

  • stsadm -o upgradesolution – Adds the new feature folder.
  • stsadm -o activatefeature – “Feature with Id ‘4520d607-699b-4025-b605-5f988c97b368’ is not installed in this farm, and cannot be added to this scope.”

Oops. Did you expect that result? The feature has to be installed first!

Conclusion

If you add a feature to a solution, make sure the feature gets installed prior to usage! There are two ways (a way to check this from code follows below):

  1. Install the new feature
  2. Retract and Redeploy the solution
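
If you want to guard against this from code before activating, you can check whether the feature definition is installed in the farm; a minimal sketch, using the ID from the error above:

// SPFarm.Local.FeatureDefinitions returns null if the definition is not installed.
var featureId = new Guid("4520d607-699b-4025-b605-5f988c97b368");
SPFeatureDefinition definition = SPFarm.Local.FeatureDefinitions[featureId];
if (definition == null)
{
	// not installed yet – activating it would fail with the error shown above
}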

CKS – Dev for Visual Studio 2012 and 2013

The new release brings support for Visual Studio 2013 🙂

The CKS – Development Tools Edition for Visual Studio 2012 and 2013 is a collection of Visual Studio templates, Server Explorer extensions and tools providing accelerated SharePoint 2010/2013 development based on Microsoft’s SharePoint 2010/2013 development tools.

http://visualstudiogallery.msdn.microsoft.com/cf1225b4-aa83-4282-b4c6-34feec8fc5ec?SRC=VSIDE

Activating Features after Solution Deployment via VS

Visual Studio allows an F5 deployment. I guess you all know that. The part where you have to think carefully is when you add Features to your project.

Should you activate “Activate On Default”? Well, it depends (as always). Usually I don’t enable that setting, because features tend to be activated on scopes you won’t expect.

The problem

Take a WebApplication-scoped feature, for example. It might create SafeControl entries for your controls. Do you really want them added to an Extranet WebApplication if your solution is solely for an Intranet application?

The problem does not exist for you if you auto-activate your features and have set your deployment configuration to “Default”. But in my case, I use “No Activation” and “Activate On Default = false” most of the time. Then, after you deploy an update of your solution, SharePoint retracts and re-adds the solution. The consequence is a deactivated feature 🙂 (in the case of Farm and WebApplication scoped features).

My solution

CKS rocks! What does that have to do with this?

The CKS Extensions for Visual Studio (http://cksdev.codeplex.com/) can upgrade the solution like you would do via PowerShell or stsadm, with a new deployment option named “Upgrade Solution (CKSDev)”.


Unfortunately CKS isn’t available for the Visual Studio 2013 preview. So I had to do something else to avoid the problem of non-activated features after deployment.

Fortunately Microsoft provides a command line option for post-deployment actions. And since the SharePoint URL is known from the properties of the project, it can be used via the variable $(SharePointSiteUrl). Combined with stsadm to activate the feature, I had all I needed.


So for now, deployment from VS works again 🙂

SPQuery for my tasks

Developing solutions for multiple languages (or a language other than English) can sometimes be a bit painful. To configure a web part to display only my tasks, I would filter for [Me] or [Ich].


To achieve the same via code/CAML, you can filter by UserID instead of the string “Me”.

<Where>
  <Eq>
    <FieldRef Name="AssignedTo" />
    <Value Type="Integer">
      <UserID />
    </Value>
  </Eq>
</Where>
<OrderBy>
  <FieldRef Name="Status" />
  <FieldRef Name="Priority" />
</OrderBy>
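
From code, the same CAML goes into an SPQuery; a minimal sketch, assuming “tasksList” is an SPList reference to the tasks list:

// Query "my tasks" via the current user's ID instead of the localized [Me]/[Ich]
var query = new SPQuery
{
	Query = "<Where><Eq><FieldRef Name='AssignedTo' /><Value Type='Integer'><UserID /></Value></Eq></Where>" +
			"<OrderBy><FieldRef Name='Status' /><FieldRef Name='Priority' /></OrderBy>"
};
SPListItemCollection myTasks = tasksList.GetItems(query);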

This is just a reminder for me, so I can find the information more quickly. But maybe this is useful for some of you as well 🙂