Lately, I have been hearing about the difficulty of working with SharePoint remotely when you only have the unique Id or the Url of a document. Developers want to get a list item based on its unique Id (guid) or Url using one of the SharePoint out-of-the-box web services. The problem with this scenario is that the web services need context. Context refers to where the web service should make its request. For example, if I have the Id (guid) of a document and want to get its metadata via the Lists web service, then I must at least have the server name, site name and document library name in order to call the web service. You can see this in the code below:
public static XmlNode GetListItemById(string guidId)
{
    // Caml query against the item's UniqueId (guid) field.
    string query = "<mylistitemrequest><Query><Where><Eq>" +
        "<FieldRef Name=\"UniqueId\"/>" +
        "<Value Type=\"Lookup\">" + guidId + "</Value>" +
        "</Eq></Where></Query></mylistitemrequest>";

    XmlDocument doc = new XmlDocument();
    doc.LoadXml(query);

    // Note how the server, site and document library all have to be hard coded.
    listservice.Lists listProxy = new listservice.Lists();
    listProxy.Url = "http://basesmcdev2/sites/tester1/_vti_bin/lists.asmx";
    listProxy.UseDefaultCredentials = true;

    XmlNode queryNode = doc.SelectSingleNode("//Query");
    XmlNode retNode = listProxy.GetListItems("tester2", string.Empty, queryNode,
        null, null, null, null);

    return retNode;
}
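Calling this hard coded version might look something like the following sketch (the guid here is just a made up value for illustration):

XmlNode items = GetListItemById("{2AF4D1E0-3C55-4A6B-9C3D-1B2E5F6A7D89}");

// Dump the rs:data/z:row results that GetListItems returns.
Console.WriteLine(items.OuterXml);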
As a developer it would be easier to work with a Url, because then you have all the parts needed to make the web service call work, and you can make the method more generic so it works with any Url passed to it. The big problem with SharePoint Urls is getting at the different parts that represent the containers you want to work with. For example, look at the Url below:
http://basesmcdev2/sites/tester1/tester2/heytest/25mb.pdf
Looking at this Url you can easily pick out the server name, but can you easily pick out the site name? How about the document name? Is tester2 a document library or a sub site? Is heytest a sub site or a folder?
In this case tester1 is the site name, tester2 is the document library name and heytest is the folder name. You cannot reliably figure this out by parsing the Url alone. Fortunately, SharePoint has a web service called Webs with a method called WebUrlFromPageUrl, which returns the Url of the web for a given full Url to a document. Using this web service call you can construct a generic method that breaks a SharePoint document Url into the parts you need to successfully call the Lists web service. Below is an example called TryGetSPUrlParts, which follows the same pattern as the .Net framework's Try methods: it returns the parsed values through out parameters and returns a bool telling you whether parsing succeeded, so you can call it inside an "if" statement. The method calls WebUrlFromPageUrl and uses the returned web Url to determine the other parts using LINQ. TryGetSPUrlParts returns the web Url, which you use to set the Url property of the web service proxy you want to call; the list name, which is needed to call any method on the Lists web service; and the fileRef, which is the file name along with any folder paths it is contained in. The file name and folders must be used when querying the FileRef field in your caml query on the GetListItems method.
public static bool TryGetSPUrlParts(string fileUrlPath,
    out string webUrl, out string listName, out string fileRef)
{
    bool success = false;
    webUrl = string.Empty;
    listName = string.Empty;
    fileRef = string.Empty;

    Uri fileUrl = new Uri(fileUrlPath);

    try
    {
        // "websservice" is the web reference added for /_vti_bin/webs.asmx.
        websservice.Webs websProxy = new websservice.Webs();
        websProxy.Url = fileUrl.GetLeftPart(UriPartial.Authority) + "/_vti_bin/webs.asmx";
        websProxy.UseDefaultCredentials = true;

        // WebUrlFromPageUrl returns the url of the web containing the document.
        webUrl = websProxy.WebUrlFromPageUrl(fileUrlPath);

        // Whatever follows the web url is the list, any folders and the file name.
        List<string> containerParts = fileUrlPath.Substring(webUrl.Length)
            .Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries)
            .ToList();

        listName = containerParts.Take(1).First();
        fileRef = fileUrl.AbsolutePath.Substring(1);

        success = true;
    }
    catch (Exception)
    {
        // Bad Url or web service failure; report the parse as unsuccessful.
        success = false;
    }

    return success;
}
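As mentioned above, the Try pattern means the method can sit right inside an "if" statement. Here is a quick sketch of calling it with the sample document Url from earlier:

string fileUrlPath = "http://basesmcdev2/sites/tester1/tester2/heytest/25mb.pdf";
string webUrl;
string listName;
string fileRef;

if (TryGetSPUrlParts(fileUrlPath, out webUrl, out listName, out fileRef))
{
    // For the sample Url: webUrl is http://basesmcdev2/sites/tester1,
    // listName is tester2 and fileRef is sites/tester1/tester2/heytest/25mb.pdf.
    Console.WriteLine(webUrl + " | " + listName + " | " + fileRef);
}
else
{
    Console.WriteLine("Could not resolve the SharePoint parts for " + fileUrlPath);
}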
So let's put TryGetSPUrlParts to work with a method that calls the Lists web service's GetListItems using any Url passed into it. This example shows how your code can work with any Url without hard coding any site, list or folder names.
public static string GetListItemUniqueIdByUrl(string fileUrlPath)
{
    string value = string.Empty;
    string webUrl;
    string listName;
    string fileRef;

    if (TryGetSPUrlParts(fileUrlPath, out webUrl, out listName, out fileRef))
    {
        // Query the list for the item whose FileRef matches the folders and file name.
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<mylistitemrequest><Query><Where><Eq>" +
            "<FieldRef Name=\"FileRef\"/>" +
            "<Value Type=\"Text\">" + fileRef + "</Value>" +
            "</Eq></Where></Query></mylistitemrequest>");

        listservice.Lists listProxy = new listservice.Lists();
        listProxy.Url = webUrl + "/_vti_bin/lists.asmx";
        listProxy.UseDefaultCredentials = true;

        XmlNode queryNode = doc.SelectSingleNode("//Query");
        XmlNode retNode = listProxy.GetListItems(listName, string.Empty, queryNode,
            null, null, null, null);

        // Pull the unique id off the returned z:row element with LINQ to XML.
        // If ows_UniqueId is not returned with the default view, request the
        // UniqueId field explicitly via the viewFields parameter above.
        XNamespace z = "#RowsetSchema";
        XElement e = XElement.Parse(retNode.InnerXml);
        string uniqueId = e.Descendants(z + "row")
            .Select(r => (string)r.Attribute("ows_UniqueId"))
            .FirstOrDefault();

        if (!string.IsNullOrEmpty(uniqueId))
        {
            value = uniqueId;
        }
    }

    return value;
}
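For example, using the sample document Url from earlier (the output formatting is just for illustration):

string uniqueId = GetListItemUniqueIdByUrl(
    "http://basesmcdev2/sites/tester1/tester2/heytest/25mb.pdf");

Console.WriteLine(string.IsNullOrEmpty(uniqueId)
    ? "No matching list item was found."
    : "UniqueId: " + uniqueId);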
What about SharePoint 2010?
So does SharePoint 2010 make it easier to work with Urls? With the new managed Client Object Model it is much easier. The Client Object Model lets you, as a developer, manipulate much the same types of objects as you would with server-side code. The following method, GetFileDetailsByUrl, uses the Client Object Model along with the full Url of a SharePoint document to get the properties of a file. The nice part is that you no longer need LINQ to XML to pull the data out of returned web service calls.
// Requires: using Microsoft.SharePoint.Client;
public static void GetFileDetailsByUrl(string fileUrlPath)
{
    Uri fileUrl = new Uri(fileUrlPath);

    // The context is created from the server part of the Url here; depending on
    // where the document lives you may want to point it at the containing site instead.
    ClientContext clientContext =
        new ClientContext(fileUrl.GetLeftPart(UriPartial.Authority));

    File file = clientContext.Web.GetFileByServerRelativeUrl(fileUrl.AbsolutePath);

    clientContext.Load(file);
    clientContext.ExecuteQuery();

    DateTime modifiedDate = file.TimeLastModified;
}
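If you only need one or two properties, the Client Object Model also lets you ask for just those by passing lambda expressions to Load, so less data goes over the wire. Here is a small sketch of that idea (GetFileModifiedDateByUrl is just a made up helper name, and it makes the same assumption about creating the context from the server Url):

// Requires: using Microsoft.SharePoint.Client;
public static void GetFileModifiedDateByUrl(string fileUrlPath)
{
    Uri fileUrl = new Uri(fileUrlPath);

    using (ClientContext clientContext =
        new ClientContext(fileUrl.GetLeftPart(UriPartial.Authority)))
    {
        File file = clientContext.Web.GetFileByServerRelativeUrl(fileUrl.AbsolutePath);

        // Only Name and TimeLastModified are requested instead of the default property set.
        clientContext.Load(file, f => f.Name, f => f.TimeLastModified);
        clientContext.ExecuteQuery();

        Console.WriteLine(file.Name + " was last modified " + file.TimeLastModified);
    }
}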
So you can see that by leveraging other SharePoint web services in 2007 you can create more generic, re-usable code that works with the rest of the SharePoint web services. Many of the out-of-the-box web services in 2007 and 2010 complement each other, and by combining them you can create a framework for working with SharePoint remotely. Finally, with SharePoint 2010 and its new Client Object Model you are empowered to easily create remote SharePoint applications.