tag:blogger.com,1999:blog-38263059380881283202024-03-13T03:03:31.849-07:00SharePoint Field NotesSteve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.comBlogger104125tag:blogger.com,1999:blog-3826305938088128320.post-22929699246438960182017-11-22T21:35:00.000-08:002017-11-22T21:46:40.446-08:00Run Kubernetes Locally with Hyper-V on Windows Server 2016<div id="scid:77ECF5F8-D252-44F5-B4EB-D463C5396A79:3caeae8e-e9f5-4ef5-907a-5344fc3a1a0b" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/Containerization" rel="tag">Containerization</a>,<a href="http://technorati.com/tags/Kubernetes" rel="tag">Kubernetes</a>,<a href="http://technorati.com/tags/Docker" rel="tag">Docker</a>,<a href="http://technorati.com/tags/Windows" rel="tag">Windows</a></div> <p>Recently I have been working on a project that requires Docker containers and the ability to scale out services. Docker itself offers this capability with Docker swarm. However, the development community overwhelmingly supports containerization using Kubernetes. Kubernetes is a container orchestration platform, originally developed by Google, that offers a rich set of features for scaling containerized applications. I like to do all my development on multiple Windows virtual machines using Hyper-V; some days I have up to three virtual machines running three different projects. Doing Docker development required Windows Server 2016, and I had no problem testing Docker swarm across multiple virtual machines. Trying to develop with Kubernetes locally on a Windows Server 2016 virtual machine was a different story. There is not a lot of information on how to accomplish this; in fact, I could not find any. Most of the information covered running Kubernetes Minikube on Windows 10, but not in a virtual machine. 
This post will give you all the steps you need to get Kubernetes Minikube to run on a Windows Server 2016 Hyper-V virtual machine.</p> <h2>Minikube</h2> <p>Minikube is the Kubernetes distribution you can use to develop with and learn Kubernetes locally. You can do almost anything with Minikube that you could if you were running Kubernetes in Azure, except load balancing and scaling out across nodes. <a href="https://kubernetes.io/docs/getting-started-guides/minikube/#minikube-features" target="_blank">Learn more about Minikube here</a>. Minikube runs in its own virtual machine that runs Linux, so when you want to use it on a Windows virtual machine you must enable Hyper-V. You are nesting virtual machines. Here are the steps you need to take to get Minikube to run reliably on your Windows Server 2016 Hyper-V virtual machine.</p> <h3> </h3> <h2>Steps</h2> <blockquote> <h3>Configure Nested Virtualization</h3></blockquote> <blockquote> <p>The first thing you must do is enable nested virtualization on the Windows Server 2016 virtual machine you want to run Minikube on. You must do this from the host using PowerShell, while the virtual machine is shut down.</p><pre class="brush:powershell">Set-VMProcessor -VMName <vmname> -ExposeVirtualizationExtensions $true </pre></blockquote>
<blockquote>
<h3>Install Hyper-V Feature on the Windows Server 2016 Virtual Machine</h3></blockquote>
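<blockquote> <p>Inside the virtual machine, the Hyper-V role can be added from an elevated PowerShell prompt (a sketch; note this will reboot the virtual machine):</p><pre class="brush:powershell"># Install the Hyper-V role plus management tools, then restart
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
</pre></blockquote>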
<blockquote>
<h3>Download the Latest Version of KubeCtl and Minikube for Windows</h3></blockquote>
<blockquote>
<p><a href="https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/windows/amd64/kubectl.exe" target="_blank">KubeCtl</a></p>
<p><a href="https://github.com/kubernetes/minikube/releases" target="_blank">Minikube</a></p>
<p>Copy both to the root of the C drive, or the root of the drive you will be running PowerShell from. Rename the downloaded Minikube executable to minikube.exe. This will make it easier to execute Minikube commands in PowerShell.</p></blockquote>
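<blockquote> <p>For example (a sketch, assuming the release download is named minikube-windows-amd64.exe and was copied to the root of C:):</p><pre class="brush:powershell"># Rename the downloaded release binary so commands are shorter
Rename-Item C:\minikube-windows-amd64.exe minikube.exe
</pre></blockquote>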
<blockquote>
<h3>Create a Virtual Switch for Minikube</h3></blockquote>
<blockquote>
<p>Create a virtual switch to be used with the Minikube virtual machine using PowerShell</p><pre class="brush:powershell">New-VMSwitch -Name VmNAT -SwitchType Internal
</pre></blockquote>
<h3> </h3>
<blockquote>
<h3>Assign an IP Address to the Virtual Switch Adapter</h3></blockquote>
<blockquote><pre class="brush:powershell">Get-NetAdapter "vEthernet (VmNat)" | New-NetIPAddress -IPAddress 192.168.137.1 -AddressFamily IPv4 -PrefixLength 24
</pre></blockquote>
<h3> </h3>
<blockquote>
<h3>Configure the Ethernet Microsoft Hyper-V Network Adapter</h3>
<p>Internet Protocol 4 enabled <br>Hyper-V Extensible Virtual Switch disabled<br>Internet Protocol 4 has 8.8.8.8 as the preferred DNS server</p>
<p>Share this adapter with the virtual switch adapter setup for Minikube. This is found in the sharing tab.</p>
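<p>If you prefer to script these settings, the preferred DNS server can also be set from PowerShell (a sketch; the interface alias may differ on your machine):</p><pre class="brush:powershell"># Point the Ethernet adapter at Google's public DNS
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 8.8.8.8
</pre>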
<h3>Configure the Virtual Switch Ethernet Adapter </h3>
<p>Internet Protocol 6 disabled</p>
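<p>Disabling IPv6 on the virtual switch adapter can also be done from PowerShell (a sketch; the adapter name matches the switch created earlier):</p><pre class="brush:powershell"># Unbind the IPv6 stack from the Minikube virtual switch adapter
Disable-NetAdapterBinding -Name "vEthernet (VmNAT)" -ComponentID ms_tcpip6
</pre>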
<h3>Reboot the Virtual Machine</h3>
<h3>Start Minikube</h3>
<p>Start PowerShell in administrator mode, change the directory to the root drive where minikube.exe is located, and run the following command.<br> </p></blockquote>
<blockquote><pre class="brush:powershell">.\minikube start --kubernetes-version="v1.8.0" --vm-driver="hyperv" --hyperv-virtual-switch="VmNAT" --v=7 --alsologtostderr
</pre></blockquote>
<blockquote>
<p>If everything works you will see the message <strong>“kubectl is now configured to use the cluster”</strong>.</p>
<p>To make sure everything is working, execute the following PowerShell:</p>
<p> </p></blockquote>
<blockquote><pre class="brush:powershell">.\kubectl get pods --all-namespaces
</pre></blockquote>
<blockquote>
<p>If everything is working you should see something like this.</p>
<p><img src="https://c1.staticflickr.com/5/4580/37705444765_7af1a62524_o.png"></p></blockquote>
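<blockquote> <p>As an optional smoke test you can deploy a Linux container and ask Minikube for its service URL (a sketch using the standard kubectl commands from that era; the deployment name hello-nginx is arbitrary):</p><pre class="brush:powershell"># Create a deployment running the public nginx image
.\kubectl run hello-nginx --image=nginx --port=80
# Expose it on a node port so it is reachable from the host
.\kubectl expose deployment hello-nginx --type=NodePort
# Print the URL to hit the service inside the Minikube VM
.\minikube service hello-nginx --url
</pre></blockquote>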
<h2>Minikube and Windows Containers</h2>
<p>Once this is all up and running you should be happy. Unfortunately, Minikube does not work with Windows containers yet; Kubernetes with Windows containers is only supported in Azure. You can still run Linux containers to test and learn. Another aggravation: when you stop and restart Minikube, or reboot the virtual machine, Minikube stops working because it can no longer obtain an IPv4 address. This is caused by a recent bug in Windows Server 2016. You must disable and re-enable sharing of the Ethernet adapter with the virtual switch adapter. Big pain. Below is a PowerShell script you can run at startup, or just execute after one of these scenarios. I hope you find this useful; it took me a long time to get this working consistently. I can only hope Minikube will support Windows containers and this bug gets fixed.</p><pre class="brush:powershell"># Register the HNetCfg COM library (only needed once)
regsvr32 hnetcfg.dll
# Get the Internet Connection Sharing (ICS) manager COM object
$m = New-Object -ComObject HNetCfg.HNetShare
# Find the physical Ethernet adapter and the Minikube virtual switch adapter
$c1 = $m.EnumEveryConnection |? { $m.NetConnectionProps.Invoke($_).Name -eq "Ethernet" }
$c2 = $m.EnumEveryConnection |? { $m.NetConnectionProps.Invoke($_).Name -eq "vEthernet (VmNAT)" }
$config1 = $m.INetSharingConfigurationForINetConnection.Invoke($c1)
$config2 = $m.INetSharingConfigurationForINetConnection.Invoke($c2)
# Turn sharing off on both adapters, then re-enable it
$config1.DisableSharing()
$config2.DisableSharing()
# 0 = public (shared) side, 1 = private side
$config1.EnableSharing(0)
$config2.EnableSharing(1)
</pre>Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com0tag:blogger.com,1999:blog-3826305938088128320.post-72826341512806624242017-03-06T20:14:00.001-08:002017-03-06T20:14:22.941-08:00Tips for Better Content Searching in O365 Security and Compliance<div id="scid:77ECF5F8-D252-44F5-B4EB-D463C5396A79:9fd52b58-a95a-44cf-a62b-bb58b8e86942" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/O365" rel="tag">O365</a>,<a href="http://technorati.com/tags/SharePoint" rel="tag">SharePoint</a>,<a href="http://technorati.com/tags/ECM" rel="tag">ECM</a>,<a href="http://technorati.com/tags/Search" rel="tag">Search</a></div> <table cellspacing="0" cellpadding="2" width="400" border="0"> <tbody> <tr> <td valign="top" width="200"><img src="https://c1.staticflickr.com/1/570/32454423774_e7feb0c7b0_o.png"></td> <td valign="top" width="200"><img src="https://c1.staticflickr.com/1/635/33297361515_d6e7145070_o.png"></td></tr></tbody></table> <p>One of the new features in the O365 Security and Compliance Center is the new Search and Investigation section. In this section you can do audit log searches, run eDiscovery cases and set up large content searches. Content searching allows an eDiscovery manager to search Exchange mailboxes, SharePoint Online sites and OneDrive for Business folders. This feature enables large searches and allows managers to export up to 2TB of results. Quite impressive. I recently had the chance to use this feature and found a few problems with it. This post will give you some tips on how to use this feature more effectively.</p> <h3>Why Can’t I Use Custom Managed Properties?</h3> <p>Many companies migrating to Office 365 have content types with custom site columns. These site columns get auto-generated managed properties during the search crawl. 
So you may want to do a content search using one of these from the content search UI. This UI allows users to create, edit, bulk edit and delete searches. When you create or edit a search you are offered the choice of putting either a whole KQL query or just some keywords in the large text box. You can also combine the keywords with some conditions. The condition builder only allows combining conditions with AND and is limited to a select group of managed properties. I recommend not trying to combine conditions with a complete KQL query. The biggest problem is that you get an error message if you try to use one of the custom managed properties.</p> <p><img src="https://c2.staticflickr.com/4/3886/32454423754_e32a63bfba_o.png"></p> <p><img src="https://c1.staticflickr.com/1/752/32454423664_317073840b_b.jpg"></p> <p>If you click OK to continue, Office 365 will execute the query, but it strips out the part of the KQL query containing the forbidden managed property. In the case above the search returns everything, yet still lists the forbidden KQL in the Query.</p> <h3>Always Use the Refinable Managed Properties</h3> <p>In order to use any custom managed properties, make sure to map the corresponding crawled property to a compatible refinable managed property. Don’t bother creating an alias to make your KQL queries more readable, because aliases don’t work either. Using a built-in refinable managed property gets rid of the error message and the query executes correctly.</p> <p><img src="https://c1.staticflickr.com/1/743/32454423714_e232561a8c_o.png"></p> <p>If you’re doing large content searches where you want to export the files, then this is the tool for you. Unfortunately, this tool does not make it easy for companies to leverage their custom metadata to do targeted searching. 
Hopefully, this can be fixed soon.</p>Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com0tag:blogger.com,1999:blog-3826305938088128320.post-87981036781993167312017-02-19T20:19:00.001-08:002017-02-19T20:19:54.285-08:00Multiple File Deployment for SharePoint Add-In Development with SPFastDeploy 3.7<div id="scid:77ECF5F8-D252-44F5-B4EB-D463C5396A79:964731a5-0cd8-4448-b157-d8a4277b76f1" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2016" rel="tag">SP2016</a>,<a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/Apps" rel="tag">Apps</a>,<a href="http://technorati.com/tags/Add-In" rel="tag">Add-In</a>,<a href="http://technorati.com/tags/Dev" rel="tag">Dev</a>,<a href="http://technorati.com/tags/O365" rel="tag">O365</a></div> <p>The <strong><a href="https://marketplace.visualstudio.com/items?itemName=SteveCurranMVP.SPFastDeploy" target="_blank">SPFastDeploy</a></strong> Visual Studio extension allows you to make changes to a single file in your Add-In solution. Just right click on the file and have the changes automatically deployed without having to re-deploy the whole application. You won’t lose all your previously loaded list data and you won’t have to add your client app part back to the web part page. Saves a lot of time when developing Add-In model solutions.</p> <p>Well recently I was working on some large SharePoint Add-In projects and I needed to upgrade some JavaScript libraries. I noticed how tedious it was to right click on the item in the solution explorer and click <strong>“Fast Deploy to SP App”</strong> for each upgraded file. So I added the ability to select multiple files and deploy to your SharePoint Add-In solution when developing. 
Keep the debug output window open to verify that all the files you selected were deployed. Enjoy saving more time when writing code.</p> <p><img src="https://c1.staticflickr.com/1/671/32982160035_c1b86e08b6_o.png"></p> <p><img src="https://c1.staticflickr.com/3/2681/32827392522_da4181f18c_b.jpg"></p>Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com1tag:blogger.com,1999:blog-3826305938088128320.post-69206256157406190792016-09-24T06:35:00.001-07:002016-09-24T06:35:02.358-07:00Better SharePoint Framework Code with SPRemoteAPI 1.5 VSCode Extension<div id="scid:77ECF5F8-D252-44F5-B4EB-D463C5396A79:161bc46b-39c3-41f7-b981-251e10307cb4" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SharePoint+Framework" rel="tag">SharePoint Framework</a>,<a href="http://technorati.com/tags/VSCode" rel="tag">VSCode</a>,<a href="http://technorati.com/tags/O365" rel="tag">O365</a>,<a href="http://technorati.com/tags/Dev" rel="tag">Dev</a></div> <p>I have been working with the new SharePoint Framework for the last month. This is the new Microsoft framework for SharePoint developers, which makes it incredibly easy to create web parts and web page applications using TypeScript and other web technologies. I highly recommend switching to VSCode from Visual Studio when using this framework since it has greater flexibility for working with open source web technologies. 
The new framework relies heavily on the use of TypeScript, which gives you the ability to generate JavaScript code with compile-time type checking, along with intellisense to see available properties and methods while typing code.</p> <p><a href="https://marketplace.visualstudio.com/items?itemName=SteveCurran.spremoteapi" target="_blank">SPRemoteAPI VSCode Extension</a></p> <p>If you have ever worked with the SharePoint/O365 REST API you know that it is not easy to discover what types, methods and properties are available to you. When writing TypeScript code you should always define your interfaces for REST responses and requests. The interfaces enable you to declare types, which enables intellisense and type checking. Usually there are TypeScript declaration files available for popular framework libraries; the SharePoint REST API is not one of them. There are TypeScript declarations for the SharePoint JavaScript Object Model, but they are not kept up to date and do not match the REST API. It can be incredibly tedious to find REST example code and then go and create an interface for the request and response. Many developers will just create an interface and do some mapping from the REST response to a custom interface. This involves making the REST call and then examining the response to determine how to do the mapping. </p> <h4>SPRemoteAPI 1.5 Supports Creating TypeScript Interfaces</h4> <p>Well, I got tired of the error-prone and laborious process of creating interfaces for the SharePoint REST API. I wanted something that would generate the interface for me, automatically and exactly mapping to the response or request of a REST call. Many of the response and request objects have properties that expose other complex types, so I wanted something that would create all the interfaces required to handle a response and request. Now all you have to do is invoke the extension, search for the type and click the <strong>“Create Interface”</strong> button. 
</p> <p><img src="https://c1.staticflickr.com/9/8070/29240483853_c7ed061603_o.png"></p> <p>After clicking the button, all the interfaces exposed by this type are created and put into the virtual declaration file. Since this file is virtual, you can then just copy whatever you want from it to your own declaration file. The <strong>“Create Interface”</strong> button is only visible on types that have properties. All properties are optional, giving you the ability to choose which properties you use. 
Enjoy.</p>Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com0tag:blogger.com,1999:blog-3826305938088128320.post-39195315691961253992016-07-04T09:27:00.001-07:002016-07-04T09:40:08.145-07:00Visual Studio Code Extension–SPRemoteAPI (SharePoint Office 365 REST API Discovery for the Masses)<div id="scid:77ECF5F8-D252-44F5-B4EB-D463C5396A79:a14258c6-49d0-41db-8fc6-655db548dc84" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/VSCode" rel="tag">VSCode</a>,<a href="http://technorati.com/tags/O365" rel="tag">O365</a>,<a href="http://technorati.com/tags/SharePoint" rel="tag">SharePoint</a>,<a href="http://technorati.com/tags/Dev" rel="tag">Dev</a></div> <p>For everyone who has ever used the Visual Studio extension <a href="https://visualstudiogallery.msdn.microsoft.com/26a16717-0c9a-4367-8dfd-bb09e7e2deb5" target="_blank">SPRemoteAPIExplorer</a>, one of the pain points was the requirement that you had to have SharePoint installed on the same box as Visual Studio. Now you can get the same basic functionality of being able to browse and discover what is available in the SharePoint and Office 365 REST and JavaScript APIs without having to have SharePoint installed. You can type in a namespace and browse the methods and properties. You can determine if they are available to be used with REST and JavaScript. You can also see what is new in SharePoint 2016. Finally, when looking at specific methods or functions, the extension gives you the required POST bodies and response payloads, allowing you to easily copy and paste them into your code. You can get the VSCode extension, <a href="https://marketplace.visualstudio.com/items?itemName=SteveCurran.spremoteapi" target="_blank">SPRemoteAPI</a>, in the Visual Studio Marketplace. 
Oh, did I mention it is <strong>FREE</strong>?</p> <h4>Using SPRemoteAPI Extension (Step 1)</h4> <p>In VSCode just hit <strong>F1</strong> and start typing SPRemoteAPI and you will see it appear in the drop down list.</p> <p><img src="https://c2.staticflickr.com/8/7631/28047195465_b8a1b7d59d_o.png"></p> <p> </p> <h4>Using SPRemoteAPI Extension (Step 2)</h4> <p>After selecting the SPRemoteAPI command you will be presented with all the available types in the SharePoint Office 365 remote API. You can start typing and the list will automatically filter as you type. The example below shows typing <strong>“move” </strong>and the list is filtered down to types with the word move contained in them. This list shows a GitHub icon (<strong>flame</strong>) next to types that are new in SharePoint 2016. It also lists whether the type can be used in REST or JavaScript. Some types are not available for both.</p> <p><img src="https://c2.staticflickr.com/8/7117/27433483154_a2c1cae464_b.jpg"></p> <h4>Using SPRemoteAPI Extension (Step 3)</h4> <p>Once you have selected the type you are presented with an information dialog showing the type along with options for displaying properties and methods. The options also show you the number of each contained in the type. </p> <p><img src="https://c2.staticflickr.com/8/7334/28047195335_1eaf28fdd8_c.jpg"></p> <h4>Using SPRemoteAPI Extension (Step 4)</h4> <p>Select the methods option and you are presented with a list of available methods to choose from.</p> <p><img src="https://c2.staticflickr.com/8/7390/28047195515_d8e37fb564_o.png"></p> <h4>Using SPRemoteAPI Extension (Step 5)</h4> <p>Choose a method and a new code window (virtual document) opens containing a JSON representation of all the method’s information needed to call it remotely using REST. It shows you the parameter types, required post body and response body. The post body can be copied into whatever REST calling framework you are using, such as fetch or jQuery. 
The response can be used to guide you in what to expect in the payload response from the call. This gives you the ability to write remote REST calls without having to do all the extra experimentation to see what the call returns. Having both the body and response JSON templates will save you a lot of time searching on the internet.</p> <p><img src="https://c2.staticflickr.com/8/7336/27969723211_275dfeb3a5_b.jpg"></p> <h4>What about properties?</h4> <p>Below is an example of the code window you are given when you select properties. It shows you all the available properties for the type and the information you need to determine what is available remotely from SharePoint Office 365. </p> <p><img src="https://c2.staticflickr.com/8/7436/27969745441_2a79474539_b.jpg"></p> <h4>SPRemoteAPI in Action</h4> <p><img src="https://c2.staticflickr.com/8/7458/27907653992_724924a479_o.gif"></p> <h4> </h4> <h4>SPRemoteAPI VSCode Extension – SharePoint Office 365 REST Discovery at your fingertips</h4> <p>This Visual Studio Code extension was created to open up the SharePoint Office 365 remote API to the many developers who do not use Visual Studio to develop. This extension will run on non-Windows environments and of course does not require SharePoint to be installed locally. You can now easily figure out what is available and how to call any SharePoint REST API without having to search and page through mounds of documentation. 
Enjoy being productive!</p>Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com0tag:blogger.com,1999:blog-3826305938088128320.post-53075611838022492652016-03-31T14:29:00.000-07:002016-04-03T14:29:21.517-07:00Understanding SharePoint 2016 Remote API Changes (SPRemoteAPIExplorer 3.0)<div id="scid:77ECF5F8-D252-44F5-B4EB-D463C5396A79:ac3727de-40f9-4d00-b2f6-dedcffa0b116" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2016" rel="tag">SP2016</a>,<a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/Dev" rel="tag">Dev</a>,<a href="http://technorati.com/tags/Apps" rel="tag">Apps</a></div> <p>Well SharePoint 2016 has been released and you have no clue what new features are available from the remote API. There are a total <strong>250</strong> <strong>new types</strong> you can use and <strong>changes to an existing 86 types</strong>. This results in thousands of new properties and methods that are undocumented. You can now get started on understanding the changes with the updated<strong> <a href="https://visualstudiogallery.msdn.microsoft.com/26a16717-0c9a-4367-8dfd-bb09e7e2deb5" target="_blank">SPRemoteAPIExplorer 3.0</a></strong> Visual Studio extension. I have used this extension for years to find and discover new methods to call in the SharePoint 2013 remote API. If you have used it before you will know it can generate the Ajax REST method calls for you, create JSON for REST responses and payloads or create JSON object paths for deeply nested responses. Version 3.0 now gives you the ability to switch between SharePoint 2013 and SharePoint 2016 remote API’s. 
It also surfaces the complex types that are used as parameters and responses; this will give you a better understanding of the wide range of types that you need to know to code effectively against SharePoint On-Premises or Online. Finally, it gives you an option to see just the types that are new or have changed, which will make it easy to discover some of the great things that may make your coding life easier. Remember that the SharePoint 2016 remote API applies to both On-Premises and Online, but some methods may work only in SharePoint Online.</p> <h4>Easily Identify Remote API changes in SharePoint 2016</h4> <p>To easily find the new changes, right click the top node to see the main context menu and select the<strong> “View only new and changed”</strong> menu item.</p> <p><img src="https://c2.staticflickr.com/2/1475/26174134516_7ed043ca42_o.png"></p> <p>After clicking this menu item the explorer tree only shows types that are new or types that have changes. All new and changed features are identified with a <strong>“red dot”</strong> in their icons.</p> <p><img src="https://c2.staticflickr.com/2/1479/26200051495_30b6375197_o.png"></p> <h4>Show Complex Types used in the Remote API</h4> <p>Clicking the <strong>“Show complex types”</strong> menu item will load all the complex types into the explorer. Typically these types are used as parameters and responses and contain no methods. </p> <p><img src="https://c2.staticflickr.com/2/1452/25927189880_f7431d28b4_o.png"></p> <h4>Discover if Methods are supported by REST</h4> <p>Remember that you can click on a method or any node and discover its capabilities. If you click on a method and view its properties you can see if the method is supported for REST, CSOM or JSOM. 
</p> <p><img src="https://c2.staticflickr.com/2/1493/25927189950_fee116b5f0_c.jpg"></p> <h4>Generate Ajax code for Methods</h4> <p>If you select the <strong>“Create $ajax call”</strong> context menu item for a method it will copy all the code including the JSON payload for the REST call into the clipboard. You can then copy and paste this into your code. You can also generate the JSON and response paths for method responses.</p> <p><img src="https://c2.staticflickr.com/2/1639/26200051485_39235feaa7_o.png"></p> <p><img src="https://c2.staticflickr.com/2/1624/26174134496_fba84e8f0f_z.jpg"></p> <h4>Visions for Visual Studio Code</h4> <p>There have been past complaints that this extension is dependent upon SharePoint being installed locally. SPRemoteAPIExplorer 3.0 is dependent only upon Visual Studio Office Tools, but the extension relies on the “SharePoint Server Explorer” tool to surface the API nodes. Unfortunately, this explorer will only work with SharePoint being installed. So I am currently working on a VS Code extension to surface this information. Unfortunately, it will not be as full featured as this extension but it will allow other developers to access this information. Another option would be to expose this information as a service. </p> <h4>Time to Catch Up on your API</h4> <p>SharePoint 2016 is released so you better start catching up. I hope this new extension will help you do that and become more productive. I plan on keeping this up to date as API changes are pushed to On-Premises. It would be nice to be able to get this information from O365 instead of waiting for the On-Premises push. 
Enjoy.</p>Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com4tag:blogger.com,1999:blog-3826305938088128320.post-28950922291570498632016-01-28T15:25:00.000-08:002016-01-29T15:50:20.327-08:00What’s new in SharePoint 2016 Remote API Part 4 (Web)<div id="scid:77ECF5F8-D252-44F5-B4EB-D463C5396A79:71b87378-b249-48d6-aee7-9ae992e3633f" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2016" rel="tag">SP2016</a>,<a href="http://technorati.com/tags/O365" rel="tag">O365</a>,<a href="http://technorati.com/tags/ECM" rel="tag">ECM</a></div> <p>This is my fourth post about the new SharePoint 2016 remote API. This post will cover the 18 new methods and 8 new properties in the Microsoft.SharePoint.SPWeb class. I am now working with the SharePoint 2016 RC released on January 20th. It is evident that some methods exposed on the remote API will be callable but will throw a not-implemented error in SharePoint On-Prem.</p> <h4>New SPWeb Methods</h4> <table cellspacing="0" cellpadding="2" width="400" border="0"> <tbody> <tr> <td valign="top" width="200"><img src="https://c2.staticflickr.com/2/1629/24300144300_26b42072d2_o.png"></td> <td valign="top" width="200"><img src="https://c2.staticflickr.com/2/1650/23968865093_47df71d56b_o.png"></td></tr></tbody></table> <h5>CreateAnonymousLink</h5> <p>This method is callable from REST or JavaScript. It takes a URL to a document and a boolean indicating whether to grant edit access to the document. It will return a string representing an anonymous URL that will not expire. It is not implemented in On-Prem. It is implemented in SharePoint Online, but from my testing it will only work from a personal site. You cannot use it from an Add-In hosted outside of a personal site. You will receive a “MountPoint” security error. 
“MountPoint” is another word for OneDrive.</p> <h5>CreateAnonymousLinkWithExpiration</h5> <p>This method is callable from REST or JavaScript. It takes a URL to a document, a boolean indicating whether to grant edit access to the document, and a string representing an expiration date. It will return a string representing an anonymous URL that will expire on the date given. The valid formats for the date string are "yyyy-MM-ddTHH:mm:ssK" and "yyyyMMddTHHmmssK". This method is not implemented in On-Prem. It is implemented in SharePoint Online, but from my testing it will only work from a personal site. You cannot use it from an Add-In hosted outside of a personal site. You will receive a “MountPoint” security error.</p> <h5>CreateOrganizationSharingLink</h5> <p>This method is callable from REST or JavaScript. It takes a URL to a document and a boolean indicating whether to grant edit access to the document. It will return a string representing a URL that everyone in the organization can access and that will not expire. It is not implemented in On-Prem. It is implemented in SharePoint Online, but from my testing it will only work from a personal site. You cannot use it from an Add-In hosted outside of a personal site. You will receive a “MountPoint” security error. </p> <h5>DefaultDocumentLibrary</h5> <p>This method is usable from JavaScript and REST and will return the “MySite” document library if the SPWeb is a personal site; otherwise it returns the document library identified in a SharePoint resource file with the key of “documents_folder”, which is typically “Shared Documents”.</p> <h5>DeleteAllAnonymousLinksForObject</h5> <p>This method does exactly what it says. Given a URL, it will remove all sharing permissions for anonymous links created by the previously listed methods. This is callable from both REST and JavaScript. The caller must have manage permissions privileges.</p> <h5>DeleteAnonymousLinksForObject</h5> <p>This method does exactly what DeleteAllAnonymousLinksForObject does. 
It is very difficult to tell the difference between the two. This is callable from both REST and JavaScript. The caller must have manage permissions privileges. This is not implemented in SharePoint On-Prem.</p> <h5>DestroyOrganizationSharingLink</h5> <p>This destroys an organization-wide sharing link previously issued by the CreateOrganizationSharingLink method, using the URL passed in. Once again, this method is not available on SharePoint On-Prem.</p> <h5>ForwardObjectLink</h5> <p>This method, which is usable from JavaScript and REST, will create and forward a view-only email link to multiple users. The parameters are the URL of the document you want to share, the string representation of output from the people picker, an email subject and a body. This is not available on SharePoint On-Prem. This method will throw an error if any of the people selected are external or do not have view rights to the document. </p> <h5>GetContextWebThemeData</h5> <p>This is a static method which is only available through REST. It will return a JSON representation of the new beefed up SPTheme class. The method will get the current web’s theme information and render all the theme information you would ever need to help you match the current theme. Below is a screen shot of the result after calling JSON.parse on the return string. As you can see, it returns colors, background image, fonts and much more. You will need to look more closely at Microsoft.SharePoint.Utilities.SPTheme to understand what is returned.</p> <p><img src="https://c2.staticflickr.com/2/1444/24267708349_b9e1d4a8cc_o.png"></p> <h5>GetDocumentLibraries</h5> <p>This is a very nice method to return a collection of SPDocumentLibraryInformation objects for the given URL passed as an argument. It is efficient since it is a static method, making it easy to build lists of document libraries in a given web. Below is a screen shot of the return data.
It has a property telling you the last date the title of the document library was changed.</p> <p><img src="https://c2.staticflickr.com/2/1706/24635646255_204ff1b3f3_b.jpg"></p> <h5>GetFileByGuestUrl and GetFileByLinkingUrl</h5> <p>Both of these methods are a great way to access a file given links generated by the CreateLinkxxx methods on the SPWeb class. GetFileByGuestUrl will validate the URL, checking for the guest access token and whether it is expired. GetFileByLinkingUrl checks for a unique id in the querystring and returns the file using that. Both are available for JavaScript and REST.</p> <h5>GetFileById and GetFolderById</h5> <p>Both of these methods use the unique id (GUID) of the file or folder to return it. Developers have been asking for this for a while, but it was also probably put in for OneDrive. </p> <h5>GetObjectSharingSettings</h5> <p>This static method will return a Microsoft.SharePoint.ObjectSharingSettings object. This object contains many properties that tell you what sharing capabilities the current user has for the document URL passed as an argument, for example, whether the user can share the document with an external user or whether the current user can edit the document. It also takes a group id and a boolean to use simplified roles. I am not sure what these may be used for. This method is not implemented in SharePoint On-Prem and can only be used on a personal MySite in SharePoint Online. The method seems to be a great way to determine all the different permissions a user has for a document. Unfortunately, it is limited in its implementation.</p> <h5>IncrementSiteClientTag</h5> <p>This method, available for REST and JavaScript, will increment a tag value that is possibly used to flush the cached web controls for pages in a given site.</p> <h5>ShareObject</h5> <p>Static method to share an object such as a document, very similar to the ForwardObjectLink method.
This method has more parameters, such as the ability to propagate the ACL; setting this to true appears to solve some past problems with pushing permissions down to nested AD groups and universal security groups. This method also gives you an option to send an email or just give permissions to the object.</p> <h5>UnShareObject</h5> <p>Static method doing the opposite of the ShareObject method; both ShareObject and UnShareObject return a Microsoft.SharePoint.SharingResult.</p> <h4>New SPWeb Properties</h4> <table cellspacing="0" cellpadding="2" width="520" border="0"> <tbody> <tr> <td valign="top" width="200"><img src="https://c2.staticflickr.com/2/1647/24227907039_85e115f68a_b.jpg"></td> <td valign="top" width="318"><img src="https://c2.staticflickr.com/2/1470/23968865153_d582a21a4c_b.jpg"></td></tr></tbody></table> <h5>ContainsConfidentialInfo</h5> <p>This property returns true or false depending on whether the site contains any confidential information, as determined by the in-place data loss protection policy. Typically this property is used when a DLP policy is implemented through the Compliance center. </p> <h5>CurrentChangeToken</h5> <p>Used by web part pages hosted in the site to determine if the site has changed.</p> <h5>DataLeakagePreventionStatusInfo</h5> <p>This property returns a Microsoft.SharePoint.SPDataLeakagePreventionStatusInfo class for the site and tells you whether the site contains confidential information and whether external sharing DLP tips are enabled. This class also gives two help information URLs for both.
You can read more about DLP and SharePoint 2016 <a href="https://blogs.msdn.microsoft.com/mvpawardprogram/2016/01/13/data-loss-prevention-dlp-in-sharepoint-2016-and-sharepoint-online/">here</a>.</p> <p><img src="https://c2.staticflickr.com/2/1689/24048714704_13fa909ee5_o.png"></p> <h5>MembersCanShare</h5> <p>True or false depending on whether sharing has been enabled on the site.</p> <h5>RequestAccessEmail</h5> <p>If the site has the email service configured and is not hosted on a virtual server, then this property will return the email address to request access to the site.</p> <h5>ThemeData</h5> <p>Property which exposes the same JSON as the GetContextWebThemeData method.</p> <h5>ThirdPartyMdmEnabled</h5> <p>This property will return true if the site is using a third party mobile device management solution.</p> <h5>TitleResource</h5> <p>Returns a Microsoft.SharePoint.SPUserResource containing the title of the site. Apparently useful for publishing.</p> <h4>SharePoint 2016 is all About Sharing, Security and OneDrive</h4> <p>As you can see, many of the methods and properties are devoted to sharing and data loss prevention. This seems to be the theme in the new features for SharePoint 2016. As a developer you must be aware that the remote API code bases for SharePoint On-Prem and SharePoint Online have been merged. Many of the methods are not implemented at this time in On-Prem. It is very difficult to determine this unless you experiment. It appears that some of these methods can be enabled at a later time using <strong>flighting</strong> (enabling features using code).
More to come.</p>Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com0tag:blogger.com,1999:blog-3826305938088128320.post-83520207949562485572016-01-01T19:53:00.000-08:002016-01-02T08:53:09.674-08:00What’s new in SharePoint 2016 Remote API Part 3 (Files)<div id="scid:77ECF5F8-D252-44F5-B4EB-D463C5396A79:b02bdb37-6eda-4632-ad30-800bf01b250c" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/ECM" rel="tag">ECM</a>,<a href="http://technorati.com/tags/SP2016" rel="tag">SP2016</a></div> <p>This is part three of a blog series about the new features available in the SharePoint 2016 Beta 2 Remote API. This blog post will talk about the new features for files. Unfortunately, some of the new methods available for files do not necessarily work in Beta 2, for reasons I don’t understand as of yet. </p> <h4>File Versions</h4> <p>Getting previous version binaries in SharePoint has always been a pain. CSOM methods really did not work. Many developers used the File.OpenBinaryDirect method along with the URL to the previous version, which looked something like this: <code><strong><a href="http://yoursite/yoursubsite/_vti_history/512/Documents/Book1.xls">http://yoursite/yoursubsite/_vti_history/512/Documents/Book1.xls</a></strong></code></p> <p><code><font face="Arial">Of course you knew that 512 means version 1.0 and the magic number 1025 means version 2.1. There was a formula that you were required to use to generate these numbers and construct the URL. However, even if you knew how to construct the URL, the CSOM OpenBinaryDirect method would return a 404. Most developers just used the .Net web client and the URL to get the binary.
Now SharePoint 2016 has added an OpenBinaryStream method on the SPFileVersion class.</font></code></p> <p><code><img src="https://c2.staticflickr.com/6/5708/24053802605_75f444dbab_o.png"></code></p> <p>The OpenBinaryStream method is available for CSOM, REST and JSOM. The following is a code example using JSOM. Unfortunately, this still has the problem of decoding the binary stream, similar to the issue of getting the binary for an SPFile object pointed out in <strong>Mikael Svenson</strong>’s blog post <a href="http://www.techmikael.com/2013/07/how-to-copy-files-between-sites-using.html">How to copy files between sites</a>. If the file is not a text file then you get a file that has all the pages, but the pages are blank. This problem still exists in SharePoint 2016, so I recommend using this method only with managed CSOM. Also, you still need to know the magic number for the version to retrieve it. The SPFileVersions collection has many methods that use the label, except the method to retrieve it. I can delete by the label but not retrieve by the label. Why?</p><pre class="brush:jscript">function getVersionBinarySP() {
    var dfd = $.Deferred();
    var binaryData;
    hostweburl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));
    appweburl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));
    fileContentUrl = appweburl + "/_api/SP.AppContextSite(@target)/web/GetFileByServerRelativeUrl('/sites/seconddev/testdups/testdoc0.pdf')/Versions/GetById(512)/OpenBinaryStream?@target='" + hostweburl + "'";
    var executor = new SP.RequestExecutor(appweburl);
    var info = {
        url: fileContentUrl,
        method: "GET",
        binaryStringResponseBody: true,
        success: function (data) {
            binaryData = data.body;
            dfd.resolve(binaryData);
        },
        error: function (err) {
            dfd.reject(err);
        }
    };
    executor.executeAsync(info);
    return dfd;
}
</pre>
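If you are wondering where magic numbers like 512 and 1025 come from, the formula is simply major * 512 + minor. A quick sketch of it in plain JavaScript (the helper names are mine):

```javascript
// Convert a version label like "2.1" into the internal id used in
// _vti_history URLs and Versions/GetById(): id = major * 512 + minor.
function versionLabelToId(label) {
    var parts = label.split('.');
    var major = parseInt(parts[0], 10);
    var minor = parseInt(parts[1] || '0', 10);
    return major * 512 + minor;
}

// And back again, for decoding an id found in a _vti_history URL.
function versionIdToLabel(id) {
    return Math.floor(id / 512) + '.' + (id % 512);
}

versionLabelToId('1.0'); // 512
versionLabelToId('2.1'); // 1025
```

So the GetById(512) call in the example above is asking for version 1.0.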
<h4>What’s new for Files?</h4>
<p>There are many new methods and properties available for remotely accessing SharePoint files. Some of these have already been implemented in SharePoint Online. Below is a list of the new methods. </p>
<p><img src="https://c2.staticflickr.com/6/5644/23425574044_6d36f9f3f6_o.png"></p>
<h5>StartUpload, ContinueUpload, FinishUpload Methods</h5>
<p>These methods are for uploading files in fragments (chunking), which is useful for large files when connections can be dropped or throttled. SharePoint Online has had these methods for a while to help developers with throttling. The methods are enabled for CSOM, REST and JSOM. Once again, I would not try using these from JavaScript given the issues with stream encoding. I did test these with managed CSOM; below is some code:</p><pre class="brush:csharp">public void UploadFile()
{
    ClientContext clientContext = new ClientContext("http://win2012r2dev/sites/seconddev");
    var documentsFolder = clientContext.Web.GetFolderByServerRelativeUrl("/sites/seconddev/testdups");
    Microsoft.SharePoint.Client.File uploadFile = documentsFolder.Files.GetByUrl("testdoc12.pdf");
    clientContext.Load(uploadFile);
    clientContext.ExecuteQuery();
    using (var inputStream = System.IO.File.OpenRead(@"c:\wp8_enterprise_device_management_protocol.pdf"))
    {
        //Set up size of fragments to upload.
        int chunkSize = 1000000;
        int index = 0;
        Int64 offset = 0;
        //The same upload id must be used for every fragment of this file.
        var myGuid = Guid.NewGuid();
        //Note: assumes the file is larger than one fragment; a file smaller
        //than chunkSize would need FinishUpload called for the first fragment.
        while (inputStream.Position < inputStream.Length)
        {
            //Fill the buffer for this fragment. Read may return fewer bytes
            //than requested, so loop until the chunk is full or the file ends.
            byte[] buffer = new byte[chunkSize];
            int chunkBytesRead = 0;
            int bytesRead;
            while (chunkBytesRead < chunkSize &&
                   (bytesRead = inputStream.Read(buffer, chunkBytesRead, chunkSize - chunkBytesRead)) > 0)
            {
                chunkBytesRead += bytesRead;
            }
            //Wrap only the bytes actually read; the stream starts at
            //position 0 so the fragment is sent from the beginning.
            using (var stream = new MemoryStream(buffer, 0, chunkBytesRead))
            {
                if (index == 0)
                {
                    offset = uploadFile.StartUpload(myGuid, stream).Value;
                    clientContext.ExecuteQuery();
                }
                else if (inputStream.Position == inputStream.Length)
                {
                    uploadFile.FinishUpload(myGuid, offset, stream);
                    clientContext.ExecuteQuery();
                }
                else
                {
                    offset = uploadFile.ContinueUpload(myGuid, offset, stream).Value;
                    clientContext.ExecuteQuery();
                }
            }
            index++;
        }
    }
}
</pre>
<font face="Arial">These methods work together to upload a file in fragments; just make sure to use the same GUID for every request in the upload. Unfortunately, I could not get this to work. I kept getting a Cobalt (file synchronization) error of “Invalid Argument”. However, please try the code; I could be missing something.</font>
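To make the fragment arithmetic concrete, here is a small sketch of the offsets the loop above would send for a made-up file size (the helper name is mine):

```javascript
// Given a file length and fragment size, list the byte offsets at which
// StartUpload / ContinueUpload / FinishUpload would send each fragment.
function fragmentOffsets(fileLength, chunkSize) {
    var offsets = [];
    for (var pos = 0; pos < fileLength; pos += chunkSize) {
        offsets.push(pos);
    }
    return offsets;
}

// A 2.5 MB file sent in 1,000,000-byte fragments needs three calls:
// StartUpload at 0, ContinueUpload at 1000000, FinishUpload at 2000000.
fragmentOffsets(2500000, 1000000); // [0, 1000000, 2000000]
```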
<h5>ExecuteCobaltRequest Method</h5>
<p>Once again, this is a method for editing files that are supported by the Office Online Server (<strong>WOPI Protocol</strong>), one that has been available in Office 365. This method is now supported for SharePoint 2016 and Office Online Server Preview. There is little documentation on this method. It is supported in managed CSOM and JSOM, but it takes a stream as an argument, so it is probably not a good candidate for JavaScript. </p>
<h5>GetImagePreviewUrl</h5>
<p>This method returns a URL to the new image preview handler that has been used by Delve in Office 365. I blogged about using this in your own hosted Add-In or search templates in <a href="http://sharepointfieldnotes.blogspot.com/2015/06/get-faster-search-previews-in.html">Get Faster Search Previews in SharePoint Online</a>. Well, now a handy method will build it for you in SharePoint 2016. This will work with Office, PDF, TIFF, BMP and PNG files. You can pass in the width and height and it will calculate a resolution and send back a URL similar to this:</p>
<p><a title="http://win2012r2dev/sites/SecondDev/_layouts/15/getpreview.ashx?path=/sites/SecondDev/testdups/testdoc0.pdf&resolution=Width300&clienttype=webapp" href="http://win2012r2dev/sites/SecondDev/_layouts/15/getpreview.ashx?path=/sites/SecondDev/testdups/testdoc0.pdf&resolution=Width300&clienttype=webapp">http://win2012r2dev/sites/SecondDev/_layouts/15/getpreview.ashx?path=/sites/SecondDev/testdups/testdoc0.pdf&resolution=Width300&clienttype=webapp</a> </p>
<p>Unfortunately, the getpreview.ashx handler code will not work unless it is running on SharePoint Online. Huh?</p>
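If you ever need the same URL client-side, it is easy to build by hand. This sketch just mimics the getpreview.ashx URL shape shown above; the helper name and the "Width300" resolution token are taken from that example, so verify them against your own farm:

```javascript
// Build a getpreview.ashx URL like the one GetImagePreviewUrl returns.
// webUrl and docPath are assumed inputs; the resolution token follows
// the pattern visible in the example URL above.
function buildPreviewUrl(webUrl, docPath, width) {
    return webUrl + '/_layouts/15/getpreview.ashx' +
        '?path=' + encodeURI(docPath) +
        '&resolution=Width' + width +
        '&clienttype=webapp';
}
```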
<h5><font face="Arial">GetPreAuthorizedAccessUrl</font> </h5>
<p>This method returns a URL to download the document. The method takes an integer as an argument representing the number of hours the link is good for. It has a guest token attached in the querystring. Below is an example of what is returned. You will have to log in with the userid listed in the querystring.</p>
<p><a href="http://win2012r2dev/sites/SecondDev/_layouts/15/download.aspx?guestaccesstoken=NduByOVhPo1QJyW0FEZVaikciOnDC03opCSiUWylH4s%3d&docid=0bfef46a04c964c7e983248c2051709fa&expiration=12%2f31%2f2015+6%3a11%3a57+AM&userid=1&authurl=True">http://win2012r2dev/sites/SecondDev/_layouts/15/download.aspx?guestaccesstoken=NduByOVhPo1QJyW0FEZVaikciOnDC03opCSiUWylH4s%3d&docid=0bfef46a04c964c7e983248c2051709fa&expiration=12%2f31%2f2015+6%3a11%3a57+AM&userid=1&authurl=True</a></p>
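Everything interesting in that link lives in the querystring, so a small parser makes it easy to inspect the guest token, expiration and userid (the helper name is mine):

```javascript
// Pull the querystring parameters out of a pre-authorized download URL.
function parseQueryString(url) {
    var params = {};
    var query = url.split('?')[1] || '';
    query.split('&').forEach(function (pair) {
        var kv = pair.split('=');
        // '+' encodes a space in this querystring format.
        params[kv[0]] = decodeURIComponent((kv[1] || '').replace(/\+/g, ' '));
    });
    return params;
}

var link = 'http://win2012r2dev/sites/SecondDev/_layouts/15/download.aspx' +
    '?guestaccesstoken=NduByOVhPo1QJyW0FEZVaikciOnDC03opCSiUWylH4s%3d' +
    '&docid=0bfef46a04c964c7e983248c2051709fa' +
    '&expiration=12%2f31%2f2015+6%3a11%3a57+AM&userid=1&authurl=True';
parseQueryString(link).expiration; // "12/31/2015 6:11:57 AM"
```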
<h5>GetWOPIFrameUrl</h5>
<p>This is another convenience method. It returns a URL to navigate to an Office file (including PDF) in Office Online Server Preview. The method takes one argument, an integer representing the SPWOPIFrameAction enumeration. This method supports View, Edit, InteractivePreview, and MobileView. Below is an example of what is returned.</p>
<p><a title="https://stevecurran.sharepoint.com/_layouts/15/WopiFrame.aspx?sourcedoc={bec083e0-6ba6-4b87-9937-5f3c488e2f8a}&action=interactivepreview" href="https://stevecurran.sharepoint.com/_layouts/15/WopiFrame.aspx?sourcedoc={bec083e0-6ba6-4b87-9937-5f3c488e2f8a}&action=interactivepreview">https://stevecurran.sharepoint.com/_layouts/15/WopiFrame.aspx?sourcedoc={bec083e0-6ba6-4b87-9937-5f3c488e2f8a}&action=interactivepreview</a></p>
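The action name in the URL maps to the SPWOPIFrameAction enumeration. The numeric values below are my reading of that enumeration, so treat them as an assumption and check them against your SDK:

```javascript
// Assumed SPWOPIFrameAction enumeration values.
var SPWOPIFrameAction = { View: 0, Edit: 1, MobileView: 2, InteractivePreview: 3 };

// Sketch of the WopiFrame URL shape shown above (the helper name is mine).
function buildWopiFrameUrl(webUrl, sourceDocId, action) {
    var actionNames = ['view', 'edit', 'mobileview', 'interactivepreview'];
    return webUrl + '/_layouts/15/WopiFrame.aspx?sourcedoc={' + sourceDocId +
        '}&action=' + actionNames[action];
}
```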
<h5>Update</h5>
<p>The SPFile and SPFolder now support property bags via the remote API, so you can now save metadata to your files and folders using the new Properties property along with this Update method. </p><pre class="brush:jscript">function updateFilePropertyBag() {
    appweburl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));
    hostweburl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));
    context = new SP.ClientContext(appweburl);
    appContextSite = new SP.AppContextSite(context, hostweburl);
    targetWeb = appContextSite.get_web();
    var file = targetWeb.getFileByServerRelativeUrl("/sites/seconddev/testdups/testdoc0.pdf");
    var properties = file.get_properties();
    context.load(file);
    context.load(properties);
    context.executeQueryAsync(function () {
        properties.set_item("whatever", false);
        file.update();
        context.executeQueryAsync(function () { },
            function (sender, args) {
                alert(args.get_message() + '\n' + args.get_stackTrace());
            });
    }, function (sender, args) {
        alert(args.get_message() + '\n' + args.get_stackTrace());
    });
}
</pre>
<h4>Information Rights Management is a high priority in SharePoint 2016</h4>
<p>Below is a list of new properties exposed on the SPFile in the remote API. As you can see, Information Rights Management is a high priority, with the surfacing of two new properties: InformationRightsManagementSettings and EffectiveInformationRightsManagement. The former holds the default settings and the latter holds what is actually set for the document if IRM is enabled. These settings are stored in the SPFile property bag, but these new properties make it easy to read the right one. This comes in handy if you're developing an application and you want to make sure you can print or view a document. </p>
<p><img src="https://c2.staticflickr.com/6/5824/24091220596_326967d0e3_o.png"></p>
<h4>SharePoint 2016 Remote API Progress but not Perfection</h4>
<p>Well, this post shows you that the remote API for files is getting better in SharePoint 2016, but problems remain. The current Beta 2 remote API seems to be ahead of what has been implemented on the server side. Some features appear in the API but are not fully implemented or may never be implemented. If some of the methods are for O365 only then they should be removed. The merging of the API between O365 and SharePoint 2016 may cause problems for developers since it will be impossible to tell which method works on which platform. There is more information to come. I urge you to start using the remote API more so you can take advantage of new features when your customers need them and to avoid the ones that do not work.</p>Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com1tag:blogger.com,1999:blog-3826305938088128320.post-3003050986590369052015-12-21T20:09:00.000-08:002015-12-21T20:13:11.395-08:00Whats new in SharePoint 2016 Remote API Part 2 (Sharing)<div id="scid:77ECF5F8-D252-44F5-B4EB-D463C5396A79:9f447b96-2e1b-4b7f-ade1-0bd6c2d23157" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/ECM" rel="tag">ECM</a>,<a href="http://technorati.com/tags/SP2016" rel="tag">SP2016</a>,<a href="http://technorati.com/tags/CSOM" rel="tag">CSOM</a></div> <p>So this is number two in the what is new in SharePoint 2016 remote API blog series. This blog post is going to cover what is new for sharing in the remote API. In addition to some new document sharing features, SharePoint 2016 has new web sharing features. Below is a screen shot of what is new in both document and web sharing. The screen shot is from the <strong>SPRemoteAPI Explorer Visual Studio extension</strong> to be released soon for SharePoint 2016. 
</p> <p><img src="https://c1.staticflickr.com/1/761/23528623599_7716fe11b1_o.png"></p> <h4>What is new in Document Sharing?</h4> <p>The first thing you see is the CanMemberShare method on the SPDocumentSharingManager. This is available for CSOM and JavaScript, but not REST, since it takes a List as a parameter. It will return true or false depending on whether the current user has permission to share documents in the document library. Some SharePoint hosted Add-In example code is below:</p><pre class="brush:jscript">function canMemberShare() {
    spHostUrl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));
    context = SP.ClientContext.get_current();
    parentContext = new SP.AppContextSite(context, spHostUrl);
    web = context.get_web();
    canShare = SP.Sharing.WebSharingManager.canMemberShare(context, web);
    context.executeQueryAsync(function (sender, args) {
        var c = canShare;
        alert('success');
    }, function (sender, args) {
        alert('Request failed. ' + args.get_message() +
            '\n' + args.get_stackTrace());
    });
}
</pre>
<p>Also, the SPDocumentSharingManager’s UpdateDocumentSharingInfo method has added a new argument called <strong>propagateAcl.</strong> Setting this to true appears to solve some past problems with pushing permissions down to nested AD groups and universal security groups. The response type returned on this method call also has new properties. The method will return an array of results, one for each user that was given permission to a document. You now receive the display name and email of the user along with an invitation link to the document. This will make it easier to generate your own email messages. </p>
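For reference, a REST call to UpdateDocumentSharingInfo posts a JSON body along these lines. The parameter names below are my reading of the method signature (including the new propagateAcl argument), so verify them against your farm before relying on this:

```javascript
// Build the JSON body for a POST to
// /_api/SP.Sharing.DocumentSharingManager.UpdateDocumentSharingInfo.
// Role values assumed: 1 = view, 2 = edit, 3 = owner.
function buildSharingPayload(docUrl, loginName, role, propagateAcl) {
    return {
        resourceAddress: docUrl,
        userRoleAssignments: [{ UserId: loginName, Role: role }],
        validateExistingPermissions: false,
        additiveMode: true,
        sendServerManagedNotification: false,
        customMessage: '',
        includeAnonymousLinksInNotification: false,
        propagateAcl: propagateAcl
    };
}
```

You would then JSON.stringify this object and POST it with the usual accept, content-type and X-RequestDigest headers.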
<h4>Welcome to the new Web Sharing</h4>
<p>So now we have a new SPWebSharingManager. It has similar methods to the SPDocumentSharingManager. The UpdateWebSharingInformation method can be used from CSOM and JavaScript, but not REST, since it takes a Web as a parameter. It has some different parameters since we are sharing a Web. It enables you to allow external sharing and, like the SPDocumentSharingManager, it returns an array of users that you are sharing with, along with display name, email and invitation link to possibly generate your own emails. Below is some example code from a SharePoint hosted Add-In:</p><pre class="brush:jscript">function shareWebJSOM() {
    spHostUrl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));
    context = SP.ClientContext.get_current();
    parentContext = new SP.AppContextSite(context, spHostUrl);
    web = context.get_web();
    var userRoleAssignments = [];
    var roleAssignment = new SP.Sharing.UserRoleAssignment();
    //View only.
    roleAssignment.set_role(1);
    //Note: the backslash must be escaped in a JavaScript string literal.
    roleAssignment.set_userId('win2012r2dev\\test');
    userRoleAssignments.push(roleAssignment);
    var sharingResults = SP.Sharing.WebSharingManager.updateWebSharingInformation(
        context, web, userRoleAssignments, false, "Look at this web", true, false);
    context.executeQueryAsync(function () {
        var sharingResult = sharingResults[0];
        u = sharingResult.get_user();
        name = sharingResult.get_displayName();
        email = sharingResult.get_email();
        link = sharingResult.get_invitationLink();
    }, function (sender, args) {
        alert('Request failed. ' + args.get_message() +
            '\n' + args.get_stackTrace());
    });
}
</pre>
<p>Unfortunately, this will only work with an app web and not the host web. The server side code checks to make sure the Web passed as an argument is the same web as the context. So if you want to do some bulk web sharing, then PowerShell and CSOM would be ideal.</p>
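A note on the role number used with set_role above: as far as I can tell it comes from SP.Sharing.Role, with the values below. Treat the mapping as an assumption and verify it against your SDK:

```javascript
// Assumed SP.Sharing.Role values behind the set_role(1) call above.
var SharingRole = { None: 0, View: 1, Edit: 2, Owner: 3 };

// Resolve a role number back to its name, e.g. when reading SharingResult data.
function roleName(role) {
    return Object.keys(SharingRole).filter(function (k) {
        return SharingRole[k] === role;
    })[0] || 'Unknown';
}
```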
<h4><font face="Arial">More ways to share in SharePoint 2016</font> </h4>
<p>SharePoint 2016 has added a lot to the remote API features and one is the new feature to share a Web. There is more to come in the next post.</p>Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com0tag:blogger.com,1999:blog-3826305938088128320.post-63253214030888782092015-12-14T13:21:00.000-08:002015-12-14T13:21:05.604-08:00How to make the new SharePoint Hosted Add-In deploy in SharePoint 2016 Beta 2<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:b2d13a9c-3592-4867-9234-692b70af64e5" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2016" rel="tag">SP2016</a>,<a href="http://technorati.com/tags/Dev" rel="tag">Dev</a>,<a href="http://technorati.com/tags/SharePoint" rel="tag">SharePoint</a></div>
<p>So I have been working with SP2016 Beta 2 and one of the complaints right now is that the <strong>Visual Studio 2015 Office Developer Tools Preview</strong> SharePoint Hosted Add-In will not deploy. So basically you need to change the target office version to 16.0 in the Visual Studio project file. The steps are illustrated below.</p>
<p>Create a new project:</p>
<p><img src="https://c2.staticflickr.com/6/5785/23722596856_4d4486e3ec_z.jpg"></p>
<p> </p>
<p>Make sure to choose SharePoint Online as the target:</p>
<p><img src="https://c2.staticflickr.com/6/5794/23453015870_91038d44e0_z.jpg"></p>
<p>Choose SharePoint Hosted Add-In:</p>
<p><img src="https://c2.staticflickr.com/6/5636/23120573114_4030afc015_z.jpg"></p>
<p>Now when you try to deploy you will get this error:</p>
<p><img src="https://c1.staticflickr.com/1/704/23748706565_26fcd37746_c.jpg"></p>
<p>Open up the project file and change the target office version to <strong>16.0</strong></p>
<p><img src="https://c2.staticflickr.com/6/5698/23121909913_0d0a1e5467_o.png"></p>
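For reference, this is the shape of the change in the project file. The property name below is what the tools generate in my projects, so verify it in your own .csproj before editing:

```xml
<!-- SharePoint Add-In project file (.csproj): point the tools at
     SharePoint 2016 (16.0) so the Add-In will deploy to Beta 2. -->
<PropertyGroup>
  <TargetOfficeVersion>16.0</TargetOfficeVersion>
</PropertyGroup>
```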
<h4>Deploy that SharePoint hosted Add-In to SharePoint 2016 Beta 2!</h4>
<p>So after you change the target office version to 16.0 you will now be able to deploy to SharePoint 2016. Microsoft is currently working on fixing this in next phase of Visual Studio 2015 Office Developer Tools. In the meantime start digging in. </p>Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com0tag:blogger.com,1999:blog-3826305938088128320.post-27767755052809622952015-11-30T16:08:00.000-08:002015-12-04T16:08:43.425-08:00What’s New in SharePoint 2016 Remote API Part 1<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:8257cd3e-b4cf-4d33-8252-22c47f8cb454" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/ECM" rel="tag">ECM</a>,<a href="http://technorati.com/tags/SP2016" rel="tag">SP2016</a>,<a href="http://technorati.com/tags/REST" rel="tag">REST</a></div> <p>With the release of SharePoint 2016 Beta 2 last month I decided to start digging into what are some of the new features in the remote API. This will be the first in a series of posts about the new capabilities in the SharePoint 2016 remote API. Many of the new features have already shown up in earlier releases of SharePoint Online but now are available in SharePoint On-Premises. However, there are some very cool things showing up in the REST for On-Premises. Here is a short list:</p> <ul> <li>File Management <li>REST Batching <li>Document Sets <li>Compliance <li>Search Analytics <li>Work Management <li>Project Server <li>Web Sharing</li></ul> <p>In this post I will give you examples of how to use the new SP.MoveCopyUtil class with REST and a refresher on using REST batching.</p> <h4>Remember having to use SPFile and SPFolder to move and copy files?</h4> <p>To move or copy files and folders the SharePoint object model provided the MoveTo and the CopyTo methods to shuffle files around in the same web. 
These methods were never exposed in the remote API in SharePoint 2013. These are now exposed in the remote API in SharePoint 2016. This is great news but when it came to copying or moving files easily it is still cumbersome having to get the file or folder and call the method. If you are working with URLs like in search results it would be nice to just tell the server to take the source URL and move or copy it to another URL.</p> <h4>Enter the SP.MoveCopyUtil class </h4> <p>The new Microsoft.SharePoint.MoveCopyUtil class can be used with CSOM, JSOM or REST. It has four methods CopyFile, MoveFile, CopyFolder and MoveFolder. Each method takes two arguments the source URL and the destination URL. All the methods are limited to moving and copying in the same site. The class and method are static so the method is called with dot notation rather than with a forward slash. Very easy. Below is an example of a REST call to copy a file from a SharePoint hosted Add-In. </p><pre class="brush:jscript">function copyFile() {<br /><br /> var hostweburl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));<br /> var appweburl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));<br /> var restSource = appweburl + "/_api/SP.MoveCopyUtil.CopyFile";<br /><br /> $.ajax(<br /> {<br /> 'url': restSource,<br /> 'method': 'POST',<br /> 'data': JSON.stringify({<br /> 'srcUrl': 'http://win2012r2dev/sites/SecondDev/Shared%20Documents/wp8_protocol.pdf',<br /> 'destUrl': 'http://win2012r2dev/sites/SecondDev/testdups/wp8_protocol.pdf'<br /> }),<br /> 'headers': {<br /> 'accept': 'application/json;odata=verbose',<br /> 'content-type': 'application/json;odata=verbose',<br /> 'X-RequestDigest': $('#__REQUESTDIGEST').val()<br /> },<br /> 'success': function (data) {<br /> var d = data;<br /> },<br /> 'error': function (err) {<br /> alert(JSON.stringify(err));<br /> }<br /> }<br /> );<br /><br />}<br /></pre><br /><h4>Make Batch Rest Requests in SharePoint 2016</h4><br /><p>Office 365 
has had the ability to batch multiple REST commands into one request for a while. I have a post about this here: <a href="http://sharepointfieldnotes.blogspot.com/2015/01/sharepoint-rest-api-batching-made-easy.html" target="_blank">SharePoint REST Batching Made Easy</a>. This feature is now available in SharePoint 2016. With the new ability to move and copy files and folders with the new SP.MoveCopyUtil class, I thought it would be a good candidate to use to demonstrate the new batch request feature. The code below uses the <strong>RestBatchExecutor</strong> code available on <a href="https://github.com/SteveCurran/sp-rest-batch-execution" target="_blank">GitHub</a> to batch together multiple requests to copy a file using SP.MoveCopyUtil.CopyFile. Basically it builds an array of JavaScript objects like:</p><br /><p>{'srcUrl': 'http://win2012r2dev/sites/SecondDev/Shared%20Documents/file.pdf', 'destUrl': 'http://win2012r2dev/sites/SecondDev/testdups/file.pdf' }</p><br /><p>Then we loop through the array, setting the payload property and loading each request into the batch. I tried this with 50 different URLs and it executed one REST request and copied all 50. 
Very nice.</p><pre class="brush:jscript">function batchCopy() {<br /> appweburl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));<br /> hostweburl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));<br /><br /> var commands = [];<br /> var batchExecutor = new RestBatchExecutor(appweburl, { 'X-RequestDigest': $('#__REQUESTDIGEST').val() });<br /><br /> batchRequest = new BatchRequest();<br /> batchRequest.endpoint = appweburl + "/_api/SP.MoveCopyUtil.CopyFile";<br /> batchRequest.verb = "POST";<br /><br /> var mappings = buildUrlMappings();<br /><br /> $.each(mappings, function(k, v){<br /> batchRequest.payload = v;<br /> commands.push({ id: batchExecutor.loadChangeRequest(batchRequest), title: 'Rest batch copy file' });<br /> });<br /><br /> <br /> batchExecutor.executeAsync().done(function (result) {<br /> var d = result;<br /> var msg = [];<br /> $.each(result, function (k, v) {<br /> var command = $.grep(commands, function (command) {<br /> return v.id === command.id;<br /> });<br /> if (command.length) {<br /> msg.push("Command--" + command[0].title + "--" + v.result.status);<br /> }<br /> });<br /><br /> alert(msg.join('\r\n'));<br /><br /> }).fail(function (err) {<br /> alert(JSON.stringify(err));<br /> });<br /><br />}</pre><br /><h4>More SharePoint 2016 Remote API Features</h4><br /><p>The new SP.MoveCopyUtil class is very handy if you are dealing with URLs and don’t want to create a new SP.File every time you want to move or copy it. The same goes for folders. The class is very easy to use and works great with the new REST batching that is available. This is just the tip of the iceberg on the new remote API features. My next post will be about the new exposed methods on DocumentSets. 
</p> Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com2tag:blogger.com,1999:blog-3826305938088128320.post-7394371971175414842015-10-23T17:59:00.001-07:002015-10-23T18:04:27.364-07:00Did you know that SharePoint has a Work Management Service REST API?<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:3d0b8353-2cdc-4070-810b-b7695ae6fd8c" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/SP2016" rel="tag">SP2016</a>,<a href="http://technorati.com/tags/REST" rel="tag">REST</a></div> <p>There has been a lot written on SharePoint’s Work Management Service, and yet developers still hold many misconceptions about the capabilities of the API. This powerful SharePoint feature aggregates, synchronizes, and updates tasks from across SharePoint, Exchange, and Project Server. Many developers may not have leveraged this feature since it cannot be called from a SharePoint Add-in. Developers have been left to use the ScriptEditor web part along with the JSOM API. In this post I will show you how you can enable the use of the Work Management Service from a SharePoint Add-in on-prem, and how to use the existing REST API. </p> <h4>Enabling Add-in Support for the Work Management Service</h4> <p>In 2013 I created the SPRemoteAPIExplorer Visual Studio Extension (<a href="http://sharepointfieldnotes.blogspot.com/2013/12/make-developing-with-sharepoint-2013.html" target="_blank">Easy Development with Remote API</a>). This extension documents and makes the SharePoint on-prem remote API discoverable. This blog post explained how SharePoint uses xml files located in the <strong>15/config/clientcallable</strong> directory to build a cache of metadata of what is allowed in the SharePoint remote API. 
Each xml file contains the name of the assembly that contains the metadata along with the <strong>“SupportAppAuth”</strong> attribute, which can be set to true or false. If this attribute is set to false, then SharePoint will not allow the namespaces for that remote API to be called from an Add-in. In addition, if the namespace you are calling from an Add-in does not have one of these xml files, then you receive a <strong>“does not support app authentication</strong>” error. Below are the contents of the ProxyLibrary.stsom.xml file, which points to the “server stub” assembly for most of the basic SharePoint remote API.</p><pre class="brush:xml"><clientcallableproxylibrary><br /> <assemblyname supportappauth="true">Microsoft.SharePoint.ServerStub, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c</assemblyname><br /></clientcallableproxylibrary><br /></pre>When I was comparing SP2013 with SP2016 I noticed that the work management namespace has a “server stub” assembly but not an xml file in the 15/config/clientcallable directory. So I created one just like the one in SP2016, called ProxyLibrary.Project.xml, pointing to the Work Management server proxy. <pre class="brush:xml"><clientcallableproxylibrary><br /> <assemblyname supportappauth="true">Microsoft.Office.Server.WorkManagement.ServerProxy</assemblyname><br /></clientcallableproxylibrary><br /></pre><br /><p>I then did an IIS reset and, lo and behold, you can now call the Work Management API from a SharePoint Add-In.</p><br /><h4>So What’s Available in REST?</h4><br /><p>Once I had added this xml file, the namespace was exposed in the SPRemoteAPIExplorer extension. The extension shows all the classes and methods and whether they are available for JSOM, .Net and REST. Now I could see just about everything is available to be called from REST except one important thing … reading tasks! The UserOrderedSession.ReadTasks method takes a TaskQuery argument which cannot be serialized via JSON. 
It is a very complex type. However, SharePoint does support some very complex types via REST, such as the SearchRequest type for REST searches. So what’s the deal?</p><br /><p>The good news is that you can do just about everything else that the JSOM API supports. Below is an example of creating a task with REST.</p><pre class="brush:jscript">function testWorkManagmentCreateTask() {<br /> hostweburl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));<br /> appweburl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));<br /> <br /> var restSource = appweburl + "/_api/SP.WorkManagement.OM.UserOrderedSessionManager/CreateSession/createtask";<br /> $.ajax(<br /> {<br /> 'url': restSource,<br /> 'method': 'POST',<br /> 'data': JSON.stringify({<br /> 'taskName': 'test REST create task',<br /> 'description': 'cool stuff',<br /> 'localizedStartDate': '10/18/2015',<br /> 'localizedDueDate': '10/25/2015',<br /> 'completed': false,<br /> 'pinned': false,<br /> 'locationKey': 5,<br /> 'editUrl': ''<br /> }),<br /> 'headers': {<br /> 'accept': 'application/json;odata=verbose',<br /> 'content-type': 'application/json;odata=verbose',<br /> 'X-RequestDigest': $('#__REQUESTDIGEST').val()<br /> },<br /> 'success': function (data) {<br /> var d = data;<br /> },<br /> 'error': function (err) {<br /> alert(JSON.stringify(err));<br /> }<br /> }<br /> );<br /><br />}<br /></pre>Here is another example that gets the current user's task settings: <pre class="brush:jscript">function testWorkManagmentREST() {<br /> hostweburl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));<br /> appweburl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));<br /> var restSource = appweburl + "/_api/SP.WorkManagement.OM.UserOrderedSessionManager/CreateSession/ReadAllNonTaskData/UserSettings";<br /> $.ajax(<br /> {<br /> 'url': restSource,<br /> 'method':'POST',<br /> 'headers': {<br /> 'accept': 'application/json;odata=verbose',<br /> 'content-type': 
'application/json;odata=verbose',<br /> 'X-RequestDigest': $('#__REQUESTDIGEST').val()<br /> },<br /> 'success': function (data) {<br /> var d = data;<br /> },<br /> 'error': function (err) {<br /> alert(JSON.stringify(err));<br /> }<br /> }<br /> );<br /><br />}<br /></pre><br /><h4>What is the Future for Work Management REST in SP2016</h4><br /><p>SP2016 allows the Work Management API to be used from SharePoint Add-ins. Unfortunately, you still can’t read tasks from the REST API. Also, Office 365 still does not allow the API to be called from SharePoint Add-ins. In the meantime, it is good that you can still use the API from REST. To learn more about how to call the Work Management REST API, use the <a href="https://visualstudiogallery.msdn.microsoft.com/26a16717-0c9a-4367-8dfd-bb09e7e2deb5" target="_blank">SPRemoteAPIExplorer</a> extension. A very useful extension!</p> Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com1tag:blogger.com,1999:blog-3826305938088128320.post-92169274446943756832015-08-31T20:50:00.001-07:002015-08-31T20:50:41.699-07:00Using Search as a Rules Engine<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:96d7f306-e389-4cfc-a25c-fc1367afc176" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/Search" rel="tag">Search</a>,<a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/ECM" rel="tag">ECM</a></div> <p>I have recently been working on a project where we needed to evaluate the state of an object and, depending on the state, take certain actions. Seems like a simple coding task to get this done, unless the rules to evaluate state are completely dynamic. An application where rules need to be captured and easily changed typically calls for a rules engine. 
A rules engine separates business rules from the execution code. Most rules engines require the use of variables along with rules implemented in some framework code. This could be a scripting language or a full-fledged programming language like C# or Java. If the rules change, usually some code change must take place. </p> <p>Rules engines are divided into two parts: conditions and actions. Business applications will define conditions and the corresponding actions the application should take given the conditions. The conditions in a rules engine consist of a set of evaluations of the state of an object at any given point in time. I propose that, given the capabilities of a search engine, it could be used as a rules engine. Conditions in a rules engine can be converted to a query against a particular document (JSON). The query could be stored and used by the rules engine to evaluate the state of a document and then take the associated actions if the document meets the conditions. Leveraging the richness of the query language would increase the capabilities of the rules engine to define very complex rules and possibly make rule processing faster. So what would the required features of a search product be in order for it to function as a rules engine?</p> <p><img src="https://farm1.staticflickr.com/706/20262712884_65f0d4feb2_o.png"></p> <h4>Required Features of a Search Based Rules Engine</h4> <h5>Index Any Type of Document</h5> <p>The first feature of a search product to function as a rules engine would be the ability to index any type of document. In this case a document would be any valid JSON document. 
In addition, since application data can be very dynamic, a search product with the ability to query any value of that data without the overhead of having to define the schema of the document would be even better.</p> <h5>A Rich Query Domain Specific Language</h5> <p>Application rules can be very complicated, and if you are going to use search as a rules engine the product must have a strong query DSL (Domain Specific Language). The DSL should support the grouping or chaining of queries together to form a true or false condition. The query DSL should also support disabling word breaking on string values. Rules typically require exact matches, and some search engines word break by default. Finally, the query DSL should be easy to store and retrieve. This ability is essential since you will want to capture business rules and translate them to query DSL, storing them for later execution.</p> <h5>Near Real-Time Indexing</h5> <p>How fast a document is available to be searched after indexing is the most important feature for a search rules engine. Some applications will have data that is changed and must be evaluated immediately. In this case the search product must support real-time indexing where the document is available within one second. In other cases where the data is relatively static it is possible to have higher index latency.</p> <h4>SharePoint Search, Azure Search and Elasticsearch: How Do They Stack Up?</h4> <h5>SharePoint Search</h5> <p>Unfortunately, SharePoint Search fails on all three features. SharePoint does not have real-time indexing. There is no ability to programmatically index a document. Secondly, it cannot index any type of document. It is limited to whatever IFilters have been enabled. Finally, the query DSL (KQL) is limited. There has been innovation with Delve and the Graph query DSL; however, it is still limited to social and collaboration scenarios. 
</p> <h5>Azure Search</h5> <p>Azure Search is built on top of Elasticsearch and is strong in all the features except the query DSL. The query DSL remains simple and is geared more toward simpler mobile app scenarios. You can index any type of document; however, you must define your schema before it can be searched on anything other than its ID. You can search on your document within one second of indexing. A great benefit is that all fields are filterable by default, which means they support exact value matching. </p> <h5>Elasticsearch</h5> <p>ElasticSearch meets all the above feature requirements to be a search rules engine. You can index any document and search on it within one second and not have to define a schema. The ElasticSearch query DSL has an incredible number of features to support a search rules engine. It has the ability to combine multiple queries into a complex Boolean expression. However, fields are not by default set up to be searched with exact matching. This will require extra index mapping configuration, especially if you want to query arrays of child objects. Finally, the query DSL is defined in JSON, making it easy to construct, store, and retrieve.</p> <h4>What about NoSQL products like DocumentDB?</h4> <p>NoSQL databases are also an ideal technology for implementing a rules engine. These types of databases can handle large complex documents; however, the query DSL for these types of databases can vary and you must trade off between read and write optimizations. With some NoSQL databases you must do some upfront indexing in order for the data to be immediately available for evaluation. </p> <h4>The Future is JSON Documents</h4> <p>It is becoming much easier to ramp up solutions using JSON documents. The richness and flexibility the format offers makes it easy to integrate multiple data flows into your enterprise solutions. 
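</p>
<p>To make the rules-as-queries idea concrete, suppose a rule’s condition is “vendor name is exactly Metal Container and invoice total is at least 400” over a JSON invoice document. In an Elasticsearch-style JSON DSL this could be sketched as follows (the field names are invented for illustration, and the exact DSL details vary by product and version):</p>

```javascript
// Illustrative only: builds an Elasticsearch-style bool query that acts as a
// rule condition over a JSON invoice document. Field names are examples.
function buildInvoiceRuleQuery(vendorName, minTotal) {
  return {
    query: {
      bool: {
        must: [
          { term: { vendorname: vendorName } },          // exact match (no word breaking)
          { range: { invoicetotal: { gte: minTotal } } } // numeric condition
        ]
      }
    }
  };
}
```

<p>Because the condition is just a JSON object, it can be stored alongside its actions and executed later by the rules engine.</p>
<p>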
This flexibility, along with new search technologies, can be combined to implement a fairly robust rules engine to drive some of your workflow applications. Search can play a significant role in your applications. Search is not just for finding relevant documents but can be used to supplement or even drive application logic. </p> Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com0tag:blogger.com,1999:blog-3826305938088128320.post-28211187608926652942015-07-31T15:35:00.000-07:002015-08-02T16:05:53.123-07:00Recognized SharePoint MVP Seven Years Straight<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:85a009cb-3da4-4a4f-ae15-5d375f1564f2" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/MVP" rel="tag">MVP</a></div> <p>I am very thankful to be awarded a seventh straight SharePoint MVP award by Microsoft. It has been a great journey starting all the way back in 2009. I am so glad to be part of a great community that shares its expertise and experience with others. Both SharePoint and Office 365 MVPs dedicate a lot of time to provide others with information that can make them more productive. I have first-hand experience knowing that developing for SharePoint and Office 365 can be frustrating and demanding. However, I also know that MVPs get great satisfaction knowing they solved a problem for someone. Most MVPs live and breathe the technology they are involved in. We know that SharePoint and Office 365 are a great platform for making users productive. We are constantly obsessed with understanding and making the platform better. This is evident from the great number of sources of information that SharePoint and Office 365 MVPs provide and contribute to. 
MVPs produce code examples, best practices, blog posts, forum answers, development tools and great presentations. I love this community because when I need an answer to a technical problem I can usually find it from these resources. I am looking forward to another great year in the SharePoint community.</p> <p><img src="https://farm4.staticflickr.com/3672/20232262772_436c5a07fd_o.png"></p> Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com1tag:blogger.com,1999:blog-3826305938088128320.post-18418509827542967372015-06-30T07:11:00.000-07:002015-07-06T07:33:45.678-07:00Get Faster Search Previews in SharePoint Online<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:d363fa05-622d-44ea-a262-2af46b6af2d0" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/Search" rel="tag">Search</a>,<a href="http://technorati.com/tags/O365" rel="tag">O365</a>,<a href="http://technorati.com/tags/Dev" rel="tag">Dev</a></div> <p>Delve was recently released in Office 365 and the experience is a bit different from what you may be used to when using SharePoint Online search. The Delve experience can be useful when looking for relevant documents that your colleagues are working on. One of the great features of Delve is the display template it uses to display results. It uses cards showing an image preview with the file icon and the file name. You can add the card to a board, send a link, and view who the document is shared with. The card is somewhat similar to the callout image preview that you would get on certain content types when using SharePoint Online search. The callout image preview in search uses an IFrame and the Office Web Apps server to display Office documents and PDF files. 
The callout is more than a preview and gives you the ability to page through the whole document, print, or even download the document. On the other hand, Delve uses a new file handler called getPreview.ashx and only renders a first-page image preview without all the extra functionality. This is needed since the preview is displayed inline within the results. Another benefit of the handler is that it can render image previews for other file formats such as TIF, BMP, PNG and JPG files. In this post I will show you how to incorporate this new file handler into a search display template. The example uses the file handler to display an image within the search callout. However, it is fast and responsive enough to use within the body of your display template if you wish. You can download the templates here: <a href="https://drive.google.com/uc?export=download&id=0B0XCLS6Sa-u3YWx3aXBMUG5Wck0" target="_blank">Quick View Display Template</a></p> <h4>Which Managed Properties to Use?</h4> <p>I downloaded the Item_PDF.html and Item_Hover_PDF.html and renamed them to Item_QuickView.html and Item_QuickView_HoverPanel.html. I then added the <strong>UniqueId, SiteID, WebID, SecondaryFileExtension</strong> managed properties to each display template. I use the SecondaryFileExtension managed property rather than FileExtension because FileExtension returns DispForm.aspx for documents that are not included in the file types for search to crawl. File types like TIF, BMP, PNG and JPG are not crawled and you have no way to add them in SharePoint Online. The JavaScript in the Item_QuickView_HoverPanel.html uses the SecondaryFileExtension to compare against a valid list of file extensions that the preview handler can process. If it is a valid extension, the code builds a URL to the getPreview.ashx preview handler and sets the Img element’s src attribute to it. It’s that simple. 
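</p>
<p>The hover panel logic just described can be sketched as follows. Note that the extension list and the handler’s query string parameter names here are illustrative assumptions, not the handler’s documented contract, so verify them against your tenant:</p>

```javascript
// Sketch of the display template logic: check SecondaryFileExtension against
// the extensions the preview handler can render, then build the handler URL.
// The guidSite/guidWeb/guidFile parameter names are assumptions for illustration.
var previewableExtensions = ['pdf', 'docx', 'xlsx', 'pptx', 'tif', 'bmp', 'png', 'jpg'];

function buildPreviewUrl(siteUrl, siteId, webId, uniqueId, extension) {
  if (previewableExtensions.indexOf(extension.toLowerCase()) === -1) {
    return null; // not a file type the handler can process
  }
  return siteUrl + '/_layouts/15/getpreview.ashx' +
    '?guidSite=' + siteId + '&guidWeb=' + webId + '&guidFile=' + uniqueId;
}
```

<p>The returned URL would then be assigned to the Img element’s src attribute; a null result means the template should fall back to the regular icon.</p>
<p>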
</p> <p><img src="https://farm4.staticflickr.com/3924/19209807759_c00349a8ab_c.jpg"></p> <h4>Fast Viewing of Images</h4> <p>The handler returns images faster than the Office Web Apps server previewer and supports more types of images. The handler does not need an IFrame, making it much more lightweight and suitable for use within the body of your search results, much like Delve. I tried changing the metadatatoken query string value to see if I could adjust the size returned, but it had no effect. </p> <p><img src="https://farm1.staticflickr.com/286/19210149959_7809de7970_o.png"></p> <h4>The Benefits of Delve</h4> <p>The preview handler is a new feature provided by Delve. You can take advantage of it in your search display templates. You can also just use the Delve display template if you want. A great example of this is provided by <a href="http://techmikael.blogspot.com/2014/10/creating-delve-clone-using-content.html" target="_blank">Mikael Svenson</a> where he created a Delve clone for the Content Search Web Part.</p> Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com0tag:blogger.com,1999:blog-3826305938088128320.post-30950758683270325612015-05-28T06:51:00.000-07:002015-05-28T13:23:47.683-07:00Get a Handle on Your SharePoint Site Closure and Deletion Policies with JavaScript<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:9525bc68-48ed-4747-b606-4e7850ce0669" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/O365" rel="tag">O365</a>,<a href="http://technorati.com/tags/Apps" rel="tag">Apps</a>,<a href="http://technorati.com/tags/Add-ins" rel="tag">Add-ins</a>,<a href="http://technorati.com/tags/Dev" rel="tag">Dev</a></div>
<p>What is great about SharePoint hosted Add-ins (Apps) is that you can come up with some very interesting ideas on how to make people’s lives so much more productive. SharePoint has the ability to define a site policy for closing and deleting sites over a period of time. This is great when you are trying to manage many sites and sub sites that tend to proliferate over time. There has been a lot written about how this works and its benefits; see <a href="https://technet.microsoft.com/en-us/library/jj219569.aspx" target="_blank">Overview of Site Policies</a>. In this post I am going to give you some ideas on how you could create a SharePoint hosted Add-in that could help make it easier to view how your policies have been applied. I will also give you an overview of what is available for site policy management with JavaScript. </p>
<h4>Site Policy and JavaScript</h4>
<p>There is some documentation on the .Net managed remote API for managing site policies, but of course there is none for JavaScript. You can use the <strong>Microsoft.Office.RecordsManagement.InformationPolicy.ProjectPolicy</strong> namespace for the .Net managed remote API, but you must load the <strong>SP.Policy.js</strong> file and use the <strong>SP.InformationPolicy.ProjectPolicy</strong> namespace in JavaScript. Apparently, applying site policies to a web is considered a project. All methods except SavePolicy are static methods. Also, every method except SavePolicy takes a target SP.Web and the current context as arguments. Unfortunately, none of the methods are callable via the REST interface because the SP.Web is not included in the entity model. Still waiting on this. The following methods are available for managing site policies:</p>
<p><strong>ApplyProjectPolicy</strong>: Apply a policy to a target web. This will replace the existing one. </p>
<p><strong>CloseProject</strong>: This will close a site. When a site is closed, it is trimmed from places that aggregate open sites to site members such as Outlook, OWA, and Project Server. Members can still access and modify site content until it is automatically or manually deleted.</p>
<p><strong>DoesProjectHavePolicy</strong>: This will return true if the target web argument has a policy applied to it.</p>
<p><strong>GetCurrentlyAppliedProjectPolicyOnWeb</strong>: Returns the policy currently applied to the target web argument.</p>
<p><strong>GetProjectCloseDate</strong>: Returns the date when the target web was closed or will be closed. Returns (System.DateTime.MinValue) if null.</p>
<p><strong>GetProjectExpirationDate</strong>: Returns the date when the target web was deleted or will be deleted. Returns (System.DateTime.MinValue) if null.</p>
<p><strong>GetProjectPolicies</strong>: Returns the available policies that you can apply to a target web.</p>
<p><strong>IsProjectClosed</strong>: Returns true if the target web argument is closed.</p>
<p><strong>OpenProject</strong>: Basically the opposite of the CloseProject method.</p>
<p><strong>PostPoneProject</strong>: Postpones the closing of the target web if it is not already closed.</p>
<p><strong>SavePolicy</strong>: Saves the current policy. </p>
<p>When working with policies you have the Name, Description, EmailBody, EmailBodyWithTeamMailBox, and EmailSubject properties. You can only edit EmailBody, EmailBodyWithTeamMailBox and EmailSubject, and then call SavePolicy. There are no remote methods to create a new ProjectPolicy. </p>
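<p>Putting that together, here is a small sketch of editing a policy’s e-mail fields and then saving it. The set_emailSubject, set_emailBody, and savePolicy member names follow JSOM property conventions and are assumptions on my part, so verify them in your environment before relying on them:</p>

```javascript
// Sketch: updates the editable e-mail fields on a policy object and saves it.
// The policy argument is expected to expose JSOM-style setters; the method
// names used here are assumptions based on JSOM naming conventions.
function updatePolicyEmail(policy, subject, body) {
  policy.set_emailSubject(subject);
  policy.set_emailBody(body);
  policy.savePolicy(); // SavePolicy is the only instance method on ProjectPolicy
}
```

<p>As with the other JSOM calls, you would follow this with an executeQueryAsync to commit the change to the server.</p>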
<h4>Applying a Site Policy with JavaScript Example</h4>
<p>Below is an example of using the JavaScript Object Model to apply a site policy to a SP.Web. The code example is run from a SharePoint hosted Add-in and applies an available site policy to the host web. Of course your Add-in will need full control on the current site collection to do this. </p><pre class="brush:js">function applyProjectPolicy() {
appweburl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));
hostweburl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));
context = SP.ClientContext.get_current();
appContextSite = new SP.AppContextSite(context, hostweburl);
targetWeb = appContextSite.get_web();
policies = SP.InformationPolicy.ProjectPolicy.getProjectPolicies(context, targetWeb);
context.load(policies);
context.executeQueryAsync(function () {
policyEnumerator = policies.getEnumerator();
while (policyEnumerator.moveNext()) {
p = policyEnumerator.get_current();
if (p.get_name() == "test my policy") {
SP.InformationPolicy.ProjectPolicy.applyProjectPolicy(context, targetWeb, p);
context.executeQueryAsync(function () {
alert('applied');
}, function (sender,args) {
alert(args.get_message() + '\n' + args.get_stackTrace());
});
}
}
}, function (sender, args) {
alert(args.get_message() + '\n' + args.get_stackTrace());
});
}
</pre>
<h4>Getting a Better View of Your Policies</h4>
<p>When applying a site policy to a target SP.Web, all the information is stored in a hidden site collection list with the title of <strong>“Project Policy Items List”</strong>. Typically you would have to go to each site and click on <strong>“Site Settings”</strong> and click on <strong>“Site Closure and Deletion”</strong> to see what policy is applied. This informational page will show you when the site is due to close and/or be deleted. You can also immediately close it or postpone the deletion from this page. Instead of navigating to all these sites to view this information, you could navigate to the <strong>“Project Policy Items List”</strong> directly using the URL <a title="http://basesmc15/ProjectPolicyItemList/AllItems.aspx" href="http://rootsite/ProjectPolicyItemList/AllItems.aspx">http://rootsite/ProjectPolicyItemList/AllItems.aspx</a>. The <strong>AllItems</strong> view can be modified to display all the sites that have policies applied along with the expiration dates and even the number of times the deletion has been postponed. </p>
<p><img src="https://farm9.staticflickr.com/8835/18178529631_e710155391_c.jpg"></p>
<p>Of course you probably don’t want to expose this list anywhere in the site collection navigation. You also want to be careful not to modify any of this information since it is used to control the workflows that close and delete sites. Your best bet here is to write a SharePoint Add-in to surface this data where it cannot be inadvertently modified. You can make a REST call to get these items and then load the data into the grid of your choice. </p><pre class="brush:js">function getProjectPolicyItems() {
appweburl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));
hostweburl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));
sourceUrl = appweburl + "/_api/SP.AppContextSite(@target)/web/lists/getbytitle('Project Policy Item List')/items?@target='" + hostweburl + "'";
$.ajax({
'url': sourceUrl,
'method': 'GET',
'headers': {
'accept': 'application/json;odata=verbose'
},
success: function (data) {
d = data;
},
error: function (err) {
alert(JSON.stringify(err));
}
});
}
</pre>
<p><img src="https://farm8.staticflickr.com/7749/17989482918_7ecb2b395c_b.jpg"></p>
<h4>Creating Value with JavaScript</h4>
<p>It is easy to create a SharePoint Add-in to put this data into a custom grid and then have actions to call the SharePoint JSOM to change policies on sites, re-open closed sites, postpone deletion or change the email that is sent out. You could select multiple sites and apply the action once. There are many possibilities to increase productivity. The one thing that is missing from the SharePoint Remote API is the ability to view site policy settings. These settings are important when you want information about a policy that is applied to the site. You may want to know what type of site policy it is, for example, is it a close and delete policy or just a close policy? Can users postpone the deletion? Is email notification enabled and how often will it be sent? This would be information an administrator would want to quickly view from a SharePoint Add-in. Unfortunately, this information is stored in a property of the ContentType called XmlDocuments, which is not available in the SharePoint Remote API. Every time you create a new site policy, it creates a new ContentType in the root web of the site collection. All the site policy settings are stored as an xml document in the XmlDocuments property. It would be nice to have this information available, especially if it could be returned as JSON. </p>
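<p>To sketch what such a conversion could look like, here is a minimal example that pulls a few values out of a simplified policy XML string. The element and attribute names are invented for illustration and do not reflect SharePoint’s actual site policy schema:</p>

```javascript
// Illustrative only: extracts a few attribute values from a simplified site
// policy XML string into a JSON-friendly object using regular expressions.
// Attribute names are hypothetical, not SharePoint's real schema.
function policyXmlToJson(xml) {
  function attr(name) {
    var m = xml.match(new RegExp(name + '="([^"]*)"'));
    return m ? m[1] : null;
  }
  return {
    deleteEnabled: attr('DeleteEnabled') === 'true',
    emailEnabled: attr('EmailEnabled') === 'true',
    deletionPeriod: attr('DeletionPeriod')
  };
}
```

<p>A real implementation would of course parse the full XML document rather than regex-match attributes, but the point is that the settings reduce naturally to JSON.</p>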
<p>The JSOM and REST SharePoint Remote API still has many sections that are not documented. This is a shame because more and more developers are turning to creating client-side Add-ins for their simplicity in deployment and configuration. I hope this post helped you understand what is available in the client-side SharePoint Remote API for site policy management. Many times just because it is not listed in MSDN does not mean it is not available. Keep digging!</p>Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com6tag:blogger.com,1999:blog-3826305938088128320.post-73732927963808813692015-04-27T13:26:00.001-07:002015-04-27T13:59:42.384-07:00SharePoint Search, Azure Search and ElasticSearch<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:460370fd-759f-4c5f-801a-c6a38ca24d41" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/O365" rel="tag">O365</a>,<a href="http://technorati.com/tags/ECM" rel="tag">ECM</a>,<a href="http://technorati.com/tags/Search" rel="tag">Search</a>,<a href="http://technorati.com/tags/Azure" rel="tag">Azure</a></div> <p>In the past six months I have been developing solutions using SharePoint, Azure and ElasticSearch. I wanted to write a post doing a brief comparison between the three search technologies. I also want to voice my concerns and hopes regarding the direction of SharePoint search. Microsoft has created Azure Search, which is an abstraction running on top of ElasticSearch. Azure Search is still only in preview; however, it seems to be Microsoft’s focus for searching in the Cloud. The question is: why was the focus not on SharePoint search? 
In this post I will try to give you some reasons why I think SharePoint search needs to incorporate some of the great features you see in ElasticSearch.</p> <h4>What is Unstructured Data?</h4> <p>Application data is seldom just a simple list of keys and values. Typically it is a complex data structure that may contain dates, geo locations, other objects, or arrays of values. <p>One of these days you’re going to want to store this data in SharePoint; can you say InfoPath? Trying to do this with SharePoint is the equivalent of trying to squeeze your rich, expressive objects into a very big spreadsheet: you have to flatten the object to fit the document library schema—usually one field per column—basically you lose all the expressive and relational data that your business needs. <p>Application data can be stored as JavaScript Object Notation, or <a href="http://en.wikipedia.org/wiki/Json"><em>JSON</em></a>, as the serialization format for documents. JSON serialization is supported by most programming languages, and has become the standard format used by the NoSQL movement. It is simple, concise, and easy to read. <p>Consider this JSON document, which represents an invoice:</p> <p><pre class="brush:js">{<br> "vendorname": "Metal Container",<br> "items": [<br> {<br> "productdesc": "50 gal cannister",<br> "productid": 1256,<br> "productuom": "ea",<br> "quantity": 12,<br> "price": 25<br> },<br> {<br> "productdesc": "25 gal drum",<br> "productid": 1257,<br> "productuom": "ea",<br> "quantity": 12,<br> "price": 10<br> }<br> ],<br> "discountamt": 5,<br> "discountdate": "2014-02-28T00:00:00",<br> "vendor": 1600,<br> "duedate": "2014-03-31T00:00:00",<br> "invoicetotal": 420,<br> "invoicenumber": 2569<br> }</pre><br /><p>This invoice object is complex; however, the structure and meaning of the object have been retained in the JSON. Azure Search and ElasticSearch are <em>document oriented</em>, meaning that they store entire objects or documents. 
They also index the contents of each document in order to make them searchable. Document oriented searching indexes, searches, sorts, and filters documents on the whole object, not just on key-value pairs. This is a fundamentally different way of thinking about data and is one of the reasons document oriented search can perform complex searches.<br /><h4>A Comparison of Searches</h4><br /><p><img src="https://farm8.staticflickr.com/7694/17105947250_2380016f74_b.jpg"></p><br /><p>Above is a table listing a few features to compare the search technologies. Granted these are just a few and there are many other factors to compare. All of the features except for “Index Unstructured Data” are features focused on by search consumers. </p><br /><h4>SharePoint Search</h4><br /><p>SharePoint has a limit of 100 million indexed items per search service application. However, SharePoint’s strength is in crawling and indexing binary data. The other two do not come close to matching SharePoint’s capabilities. SharePoint has an extendable infrastructure which allows you to add your own custom content filtering and enrichment. SharePoint search out of the box can crawl many different types of file stores, making it easy to get up and running. SharePoint’s query language (<strong>KQL</strong>) is rich enough to allow more knowledgeable developers to create some informative search experiences for users. SharePoint search has a huge advantage over Azure and ElasticSearch when it comes to security trimming search results. SharePoint can trim results to the item level using access control lists associated with the document. 
SharePoint even has the ability to customize security trimming with a post security interface you can implement.</p><br /><p><a href="https://msdn.microsoft.com/en-us/library/office/ee558911.aspx" target="_blank">Keyword Query Language (KQL) syntax reference</a></p><br /><h4>Azure Search</h4><br /><p>According to preliminary documentation, a single dedicated Azure search service is limited to indexing 180 million items. This is based on 15 million items per partition with a maximum of 12 partitions per service. As with SharePoint, you could increase the total number of items if you created more search services. Azure Search does not support crawling and indexing binary data. It is up to you to push or pull the document data into the index. You can push data into the index with the easy to use Azure Search API in either REST or .NET. Azure Search also supports pulling the data through its built-in indexers that support Azure DocumentDB, Azure SQL or Azure hosted SQL. An Azure indexer can be scheduled to periodically run and sync changes with the index. This is very similar to a SharePoint crawl except Azure indexers do not index binary data such as images. Full-text searching of document object fields is supported. Azure Search supports authentication, though not at the user level; instead an api-key is passed in an HTTP header. Theoretically you can control user access through the OData <strong>$filter</strong> command in the API. Azure has its own query language which uses the basic operators such as ge, ne, gt, lt. 
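</p><br /><p>For example, a query in that operator style might look like the sketch below. This is only an illustration: the service URL, index name, field names, and the api-version value are hypothetical examples, not taken from a real deployment.</p>

```javascript
// Sketch: composing an Azure Search query URL using the basic OData-style
// operators (ge, lt, etc.). The service URL, index, fields, and api-version
// value are hypothetical examples, not a real service.
var serviceUrl = "https://example.search.windows.net/indexes/invoices/docs";
var filter = "invoicetotal ge 100 and duedate lt 2014-04-01T00:00:00Z";
var query = serviceUrl +
    "?api-version=2014-07-31-Preview" +
    "&$filter=" + encodeURIComponent(filter) +
    "&$orderby=duedate";
// The api-key would travel in an HTTP request header, not in the URL.
```

<p>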
It does have some geospatial functions for distance searching.</p><br /><p><a href="https://msdn.microsoft.com/en-us/library/azure/dn798921.aspx" target="_blank">Azure OData Expression Syntax for Azure Search</a></p><br /><p>Just remember that Azure Search is an abstraction layer that runs on top of ElasticSearch.</p><br /><h4>ElasticSearch</h4><br /><p>ElasticSearch is a free, open source, Java-based search product that runs on top of <a href="https://lucene.apache.org/core/" target="_blank">Lucene</a>. Lucene search has been around for a while but it is very complex. ElasticSearch is a product that mixes analytics with search and can create some very powerful insights into your index. It can index an unlimited number of items just as long as you have the servers to support it. Horizontally scaling your search could not be easier. This is why it was chosen by Microsoft to be used in Azure. It does not support crawling. It supports pushing data into the index via an easy to use REST API. It also supports pulling data using a pluggable “river” API. Rivers can be plugged in for popular NoSQL databases such as CouchDB and MongoDB. Unfortunately, rivers are now deprecated in version 1.5. However, you should be able to obtain comparable “LogStash” services which will push the data changes into the index. Azure Search more than likely is using LogStash to push data into their own instances of ElasticSearch. Security trimming is limited in ElasticSearch. It supports roles that can be synced with LDAP or AD via the “Shield” product. However, these roles do not offer item level security trimming like SharePoint does. The roles are typically used to limit access to certain indexes. ElasticSearch does support full-text searching of binary data such as images. I successfully achieved this with MongoDB and GridFS. However, as with SharePoint, storing and indexing binary data takes up a lot of storage. 
ElasticSearch has a full-fledged, sophisticated query language allowing you to search and compare nested objects within documents, all executed through a REST API.</p><br /><p><a href="http://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html" target="_blank">ElasticSearch Query DSL</a></p><br /><h4>So What is the Big Deal about Unstructured Data?</h4><br /><p>Many businesses use SharePoint to store transactional content like forms and images. Through forms processing, complex data can be captured that contains parent and child sectional data. Businesses operate on many types of forms with data being organized on the form for a purpose. For example, an invoice has child line item details that are important data to a business. If the forms processor can create a JSON object capturing the invoice as an entity, then with a NoSQL repository it can be stored intact. SharePoint on the other hand would force you to store the invoice within two lists, one for the invoice and the other for the line items. From a search perspective you would lose the relationship between the invoice and the invoice’s line items.</p><br /><p>Relationships matter when it comes to search. For example, accounts payable departments may use a <strong>“three-way matching”</strong> payment verification technique to ensure that only authorized purchases are reimbursed, thereby preventing losses due to fraud and carelessness. This technique matches the supplier invoice to the related purchase order by checking what was ordered versus what was billed. This of course would require checking line item detail. Finally, the technique then matches the invoice to a receiving document ensuring that the quantity received is what was billed. 
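</p><br /><p>As a rough sketch of how such a line-item match could be expressed, here is what an ElasticSearch nested query might look like, assuming the invoice’s items array was mapped as a nested field. The index and field names are hypothetical, modeled on the invoice JSON shown earlier.</p>

```javascript
// Sketch of an ElasticSearch "nested" query matching an invoice by a
// combination of fields on a single line item. Assumes the "items" array
// was mapped with type "nested"; index and field names are hypothetical.
var query = {
  query: {
    nested: {
      path: "items",
      query: {
        bool: {
          must: [
            { match: { "items.productid": 1256 } },      // same line item must match
            { range: { "items.quantity": { gte: 12 } } } // both of these conditions
          ]
        }
      }
    }
  }
};
// This object would be POSTed to a search endpoint such as /invoices/_search.
var payload = JSON.stringify(query);
```

<p>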
</p><br /><p><img src="https://farm8.staticflickr.com/7693/16671672063_8cfb0b3aa9_o.png"></p><br /><p>Having the ability to store the document data as JSON enables businesses to automate this process using search technologies that index this type of data. SharePoint does not have this ability, and Azure Search’s query language is currently not sophisticated enough to do this. However, ElasticSearch’s query language is capable of matching on nested objects in these types of scenarios. Being able to leverage your search to automate a normally labor-intensive process can save a business a lot of money. </p><br /><h4>Search Makes a Difference</h4><br /><p>Microsoft is moving in the right direction with search. In Azure, Microsoft is building services around NoSQL and NoSQL searching. However, the focus is still more about mobility, social and collaboration. These are important, but many businesses run on transactional data such as forms and images. I would like to see SharePoint have the ability to integrate better with Azure DocumentDB and Search, opening up the query language more to enable the rich query features of ElasticSearch. In addition, it is imperative that Microsoft come up with a better forms architecture enabling the use of JSON rather than XML for storage. 
This would open many opportunities to leverage search, such as automating some transactional content management workflow steps, building more sophisticated e-discovery cases, and creating intelligent retention policies.</p> Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com1tag:blogger.com,1999:blog-3826305938088128320.post-21373846499561795012015-03-30T08:54:00.001-07:002015-03-30T08:54:01.079-07:00Easy debugging TypeScript and CoffeeScript in Sharepoint Apps with SPFastDeploy 3.6<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:b5f2526d-6da9-43be-9baf-26876b10d721" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/O365" rel="tag">O365</a>,<a href="http://technorati.com/tags/Apps" rel="tag">Apps</a>,<a href="http://technorati.com/tags/Dev" rel="tag">Dev</a>,<a href="http://technorati.com/tags/VS2013" rel="tag">VS2013</a></div> <p><a href="https://visualstudiogallery.msdn.microsoft.com/9e03d0f5-f931-4125-a5d1-7c1529554fbd" target="_blank">SPFastDeploy 3.6</a></p> <p>If you have been developing SharePoint hosted apps for a while then you may be using TypeScript or CoffeeScript to generate the JavaScript code. You can debug the generated JavaScript in the browser, but it is hard to determine where in the TypeScript the error is occurring. Now with source mapping you can link the JavaScript to the TypeScript and step through the code. This makes it easier to figure out exactly where the code is breaking. <a href="http://blogs.msdn.com/b/davrous/archive/2014/08/22/enhance-your-javascript-debugging-life-thanks-to-the-source-map-support-available-in-ie11-chrome-opera-amp-firefox.aspx" target="_blank">Enhance your JavaScript debugging life</a>. 
If you have included TypeScript in your Visual Studio project you can check to make sure you are generating the source map for the TypeScript using the project’s TypeScript Build settings.</p> <p><img src="https://farm9.staticflickr.com/8685/16945630931_12b8262bf5_c.jpg"></p> <h4>SPFastDeploy makes it easy to step through TypeScript</h4> <p><strong>SPFastDeploy</strong> has the feature to automatically deploy your code changes to a SharePoint app web when saving. This feature deploys the JavaScript that is generated when using TypeScript or CoffeeScript. However, in order to step through your TypeScript code you must also deploy the corresponding source map and TypeScript files. <strong>Version 3.6</strong> now has the option to deploy all three files (JavaScript, source map and TypeScript) when saving. Just set the <strong>“Include source and source map” </strong>option to true. </p> <p><img src="https://farm8.staticflickr.com/7286/16759478560_b56bd27e8d_o.png"></p> <p>Now when you save your changes <strong>SPFastDeploy</strong> will wait for the TypeScript to compile and generate the JavaScript. It will then look for the corresponding source map and TypeScript file and deploy all three files to the SharePoint App. <img src="https://farm9.staticflickr.com/8717/16759234628_a66d1ab51b_o.png"></p> <p><strong>SPFastDeploy</strong> only supports deploying source map files when they are located in the same directory as the source file. You can now refresh your browser, making sure the cache is cleared, and start stepping through your changes in TypeScript.</p> <p><img src="https://farm9.staticflickr.com/8730/16978883085_6c1bf75cf1_b.jpg"></p> <h4>Increase your SharePoint development productivity with SPFastDeploy 3.6 and TypeScript </h4> <p>With this release you can now get the benefits of immediately deploying your code changes when saving and the ability to step through your TypeScript code. Previous versions did not support deploying source map and TypeScript files. 
Now one click can deploy all three. Also, this release will enable you to right-click source map and TypeScript files in the solution explorer and deploy them to your SharePoint App site. Finally, remember that all of the TypeScript support is also available for CoffeeScript. Thanks to Mikael Svenson for asking for this feature.</p> Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com7tag:blogger.com,1999:blog-3826305938088128320.post-44257005582628181332015-02-27T13:58:00.001-08:002015-03-03T06:35:37.699-08:00Easy SharePoint App Model Deployment for SASS Developers (SPFastDeploy 3.5.1)<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:f1fa03bb-6079-49c6-b988-5ea94feb1a06" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/O365" rel="tag">O365</a>,<a href="http://technorati.com/tags/Apps" rel="tag">Apps</a>,<a href="http://technorati.com/tags/Dev" rel="tag">Dev</a>,<a href="http://technorati.com/tags/VS2013" rel="tag">VS2013</a></div> <p>Last October I added support to the <strong><a href="http://bit.ly/1AvNICy" target="_blank">SPFastDeploy</a></strong> Visual Studio extension to deploy a file to SharePoint while saving <strong>CoffeeScript</strong> and <strong>LESS</strong> files. In the latest release I have added support for <strong>SASS (Syntactically Awesome Style Sheets)</strong> developers. There seems to be a growing interest among SharePoint developers and designers in using SASS. Visual Studio along with the Web Essentials extension supports compiling SCSS files and generating CSS when saving. The <strong><a href="http://bit.ly/1AvNICy" target="_blank">SPFastDeploy</a></strong> extension will automatically deploy the generated CSS file to the SharePoint hosted application. 
</p> <p><img src="https://farm9.staticflickr.com/8561/16045980893_6ba89a8b53_o.png"></p> <p> </p> <p>It will also support deploying the minified CSS file if that option is selected in Web Essentials and you select the <strong>DeployMinified</strong> option in the <strong>SPFastDeploy</strong> options.</p> <p><img src="https://farm9.staticflickr.com/8657/16665910295_cc1db55164_b.jpg"></p> <p>Finally, I have added cross-domain support. When you are doing SharePoint app model development on a different domain than the domain you are deploying to, <strong>SPFastDeploy</strong> will prompt you for credentials. This is similar to what Visual Studio does when selecting the “Deploy Solution” menu item. You will only have to enter your credentials once per Visual Studio session. </p> <p><img src="https://farm9.staticflickr.com/8621/16479763199_9b5732becc_o.png"></p> <p>So now CSS with superpowers can also be easily customized and tested using <strong>SPFastDeploy</strong>. Make a change and hit the save button. Refresh your browser and see your style change. CSS can actually be fun again. Doing remote SharePoint app model development is no problem either. 
Enjoy!</p> <table> <tbody> <tr> <td><script type="text/javascript" src="http://sharepointads.com/members/scripts/banner.php?a_aid=110158&a_bid=a4ceed22"></script></td> <td><script type="text/javascript" src="http://sharepointads.com/members/scripts/banner.php?a_aid=110158&a_bid=7755938e"></script></td></tr></tbody></table> Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com0tag:blogger.com,1999:blog-3826305938088128320.post-54556466780185813672015-01-21T15:01:00.001-08:002015-03-18T08:21:00.875-07:00SharePoint REST API Batching Made Easy<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:661463f7-9bc9-458f-96de-3c07386b6fe1" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/O365" rel="tag">O365</a>,<a href="http://technorati.com/tags/REST" rel="tag">REST</a></div> <p><font size="3" face="Georgia">Well the ability to batch SharePoint REST API requests has finally been made available on Office 365. This has been long awaited in order to bring the SharePoint REST API close to the OData specification. In addition, it was needed to help developers who prefer to use REST over JSOM/CSOM write more efficient, less “chatty” code. The REST API had no ability to take multiple requests and submit them in one network request. Andrew Connell has a great post </font><a href="http://www.andrewconnell.com/blog/part-2-sharepoint-rest-api-batching-exploring-batch-requests-responses-and-changesets" target="_blank"><font size="3" face="Georgia">SharePoint REST API Batching</font></a><font size="3" face="Georgia"> explaining how to use the $Batch endpoint. Using the new $Batch endpoint is not easy. Even though the capability closely follows the OData specification for batching, that does not mean it is easy for developers to use. 
In order to make successful batch requests you must adhere to certain rules. Most of these rules revolve around making sure the multiple endpoints, JSON payloads and request headers are placed in the correct position and wrapped with change set and batch delimiters. The slightest deviation from the rules can result in an unintelligible response, leaving a developer wondering whether any of the requests were successful. However, the most difficult part of REST batch requesting was what to do with the results. Even if you were successful at concatenating your requests together, trying to tie each request to its result seemed impossible. The OData specification states that it would be nice if the back end service sent a response that contained the same change set ID as the request, but it is not required. </font></p> <p><font face="Bell MT"><font size="3" face="Georgia">I love the SharePoint REST API. To me there is something simpler about using an endpoint instead of creating multiple objects to do the same thing. What to do? In this post I will show you a new JavaScript library I created to make it simple to take your REST requests and put them into one batch request. The library also makes it easy to access the results from the multiple requests. I have tested the library only within an O365 hosted application.</font> </font></p> <h4><font size="4" face="Georgia">Using the RestBatchExecutor</font></h4> <p><font size="3" face="Georgia">The RestBatchExecutor library can be found here </font><a href="https://github.com/SteveCurran/sp-rest-batch-execution" target="_blank"><font size="3" face="Georgia">RestBatchExecutor GitHub</font></a><font size="3" face="Georgia">. The RestBatchExecutor encapsulates all the complexity of wrapping your REST requests into one change set and batch. First, create a new RestBatchExecutor. The constructor requires the O365 application web URL and an authentication header. 
The URL will be used to construct the $Batch endpoint where the requests will be submitted. The authentication header, in the form of a JSON object, allows you to use either the form digest or the OAuth token.</font></p><pre class="brush:jscript">var batchExecutor = new RestBatchExecutor(appweburl, { 'X-RequestDigest': $('#__REQUESTDIGEST').val() });<br /></pre><br /><p><font size="3" face="Georgia">The next step is to create a new BatchRequest for each request to be batched. First, set the BatchRequest’s endpoint property to your REST endpoint. Second, set the payload property to any JSON object you want to send with your request; this is typically what you would put in the data property of a jQuery $.ajax request. Third, set the verb property. The verb property represents the HTTP request you typically use. For example, if you are updating a list item then use the verb MERGE. This is normally set using the “X-HTTP-Method” header. However, this verb must be placed at the beginning of your endpoint when submitting requests to $Batch. Other verbs would be POST, PUT, and DELETE. Finally, you can optionally set the headers property. In the case of a DELETE, MERGE or PUT you should set your “If-Match” header to either the etag of the entity or an “*”. The headers property also allows you to take advantage of JSON Light by setting the “accept” header to “application/json;odata=nometadata” for example. </font></p><br /><p><font size="3" face="Georgia">The example below shows three defined endpoints and the creation of three batch requests, representing a list item creation, an update, and a retrieval of the list. After creating a BatchRequest you will need to add it to the RestBatchExecutor using either the loadChangeRequest or loadRequest method. The loadChangeRequest method should only be used to add requests that use the POST, DELETE, MERGE or PUT verbs. This makes sure all your write requests are sent in one change set. Use the loadRequest method when doing any type of GET requests. 
Always save the unique token that is returned by both these methods. This token will be used to access the results. In the example I assign the token to an array along with a title for the operation.</font> </p><pre class="brush:js">var createEndPoint = appweburl<br /> + "/_api/SP.AppContextSite(@target)/web/lists/getbytitle('coolwork')/items?@target='" + hostweburl + "'";<br /><br />var updateEndPoint = appweburl<br /> + "/_api/SP.AppContextSite(@target)/web/lists/getbytitle('coolwork')/items(134)?@target='" + hostweburl + "'";<br /><br />var getEndPoint = appweburl<br /> + "/_api/SP.AppContextSite(@target)/web/lists/getbytitle('coolwork')/items?@target='" + hostweburl + "'&$orderby=Title";<br /><br />var commands = [];<br /><br />var batchRequest = new BatchRequest();<br />batchRequest.endpoint = createEndPoint;<br />batchRequest.payload = { '__metadata': { 'type': 'SP.Data.CoolworkListItem' }, 'Title': 'SharePoint REST' };<br />batchRequest.verb = "POST";<br />commands.push({ id: batchExecutor.loadChangeRequest(batchRequest), title: 'Rest Batch Create' });<br /><br />batchRequest = new BatchRequest();<br />batchRequest.endpoint = updateEndPoint;<br />batchRequest.payload = { '__metadata': { 'type': 'SP.Data.CoolworkListItem' }, 'Title': 'O365 REST' };<br />batchRequest.headers = { 'IF-MATCH': "*" };<br />batchRequest.verb = "MERGE";<br />commands.push({ id: batchExecutor.loadChangeRequest(batchRequest), title: 'Rest Batch Update' });<br /><br />batchRequest = new BatchRequest();<br />batchRequest.endpoint = getEndPoint;<br />batchRequest.headers = { 'accept': 'application/json;odata=nometadata' };<br />commands.push({ id: batchExecutor.loadRequest(batchRequest), title: "Rest Batch Get Items" });<br /></pre><br /><h4><font size="4" face="Georgia">Executing and Getting Batch Results</font></h4><br /><p><font size="3" face="Georgia">So now that you have created and loaded your requests, let’s submit them and get the results. 
The example below uses the RestBatchExecutor’s executeAsync method. This method takes an optional JSON argument of <strong>{crossdomain:true} </strong>which tells the method whether to use the SP.RequestExecutor for cross domain requests or the default jQuery $.ajax method. The method returns a promise. When the promise returns you can use the saved request tokens to pull the RestBatchResult from the array. The array contains objects that have their id property set to the result token and their result property set to a RestBatchResult. The RestBatchResult has two properties. The status property is the returned HTTP status, for example 201 for a successful creation or 204 for a successful merge. It is up to you to interpret the codes. The result property contains the result of the request, if any. A deletion, for example, does not return anything. However, other requests return JSON or XML depending on what the accept header is set to. The code will try to parse the returned string into JSON. If the request returns an error the result will contain the JSON for that. This example loops through the results and the saved result tokens and displays a message along with the returned status. 
</font></p><pre class="brush:js">batchExecutor.executeAsync().done(function (result) {<br /> var msg = [];<br /> $.each(result, function (k, v) {<br /> var command = $.grep(commands, function (command) {<br /> return v.id === command.id;<br /> });<br /> if (command.length) {<br /> msg.push("Command--" + command[0].title + "--" + v.result.status);<br /> }<br /> });<br /><br /> alert(msg.join('\r\n'));<br /><br />}).fail(function (err) {<br /> alert(JSON.stringify(err));<br />});<br /></pre><br /><p><img src="https://farm9.staticflickr.com/8677/16335817485_cf755cd5ce_o.png"></p><br /><h4><font size="4" face="Georgia">How Easy is Rest Batching with the RestBatchExecutor?</font></h4><br /><p><font size="3" face="Georgia">So what are some of the things that are easier with the RestBatchExecutor? No more chaining functions and promises together. Your code can be simpler now. The RestBatchExecutor allows you to write code similar to JSOM by loading requests and then executing one request. The example below shows a loop that creates multiple delete requests and then executes one request.</font></p><pre class="brush:js">var batchExecutor = new RestBatchExecutor(appweburl, { 'X-RequestDigest': $('#__REQUESTDIGEST').val() });<br />var commands = [];<br />var batchRequest;<br />for (var x = 100; x <= 133; x++) {<br /> batchRequest = new BatchRequest();<br /> batchRequest.endpoint = updateEndPoint.replace("{0}", x);<br /> batchRequest.headers = { 'IF-MATCH': "*" };<br /> batchRequest.verb = "DELETE";<br /> commands.push({ id: batchExecutor.loadChangeRequest(batchRequest), title: 'delete id=' + x });<br />}<br /></pre><br /><p><font size="3" face="Georgia">The combinations of things you can do with REST batching are interesting. For example, you could create a new list, write new items to it, then execute a search. It appears you can load any combination of valid REST endpoints and execute them within a batch. 
</font></p><br /><h4><font size="4" face="Georgia">The Future of REST Batching</font></h4><br /><p><font size="3" face="Georgia">More work needs to be done. The REST batching does not support the OData specification’s behavior for failures within a change set. If one request fails, the others are still executed and nothing is rolled back. I am sure it will be a long time before we see this capability given the complexity of its implementation. Secondly, there seems to be a hard-coded throttling limit of 15 requests within the batch. I found this when testing the code above. That limit is too low for developers doing heavier data work. Even JSOM/CSOM has a higher limit of 30 actions per request. Maybe the RestBatchExecutor could add an ExecuteQueryWithExponentialRetry method similar to CSOM’s. Finally, the batch capability needs to be implemented on SharePoint on-premises. </font></p><br /><p><font size="3" face="Georgia">The RestBatchExecutor is available on <a href="https://github.com/SteveCurran/sp-rest-batch-execution" target="_blank">GitHub</a>. It still needs more work. If you have suggestions, please feel free to contribute.</font></p>
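<p>To appreciate what the library encapsulates, the sketch below assembles a raw $batch body by hand: a write request nested inside a change set, followed by a GET, all wrapped in the outer batch. The boundary strings and site URL are arbitrary examples for illustration; the real library generates its own delimiters.</p>

```javascript
// Sketch: hand-assembling an OData $batch body. Write operations sit inside
// a change set boundary, which is nested inside the outer batch boundary.
// The boundary values and site URL are arbitrary examples.
var batchGuid = "batch_abc123";
var changeSetGuid = "changeset_def456";
var site = "https://contoso.sharepoint.com/app";

var body = [
  "--" + batchGuid,
  "Content-Type: multipart/mixed; boundary=" + changeSetGuid,
  "",
  // --- change set: one POST (create) ---
  "--" + changeSetGuid,
  "Content-Type: application/http",
  "Content-Transfer-Encoding: binary",
  "",
  "POST " + site + "/_api/web/lists/getbytitle('coolwork')/items HTTP/1.1",
  "Content-Type: application/json;odata=verbose",
  "",
  JSON.stringify({ '__metadata': { 'type': 'SP.Data.CoolworkListItem' }, 'Title': 'Batched item' }),
  "",
  "--" + changeSetGuid + "--",
  // --- read request outside the change set ---
  "--" + batchGuid,
  "Content-Type: application/http",
  "Content-Transfer-Encoding: binary",
  "",
  "GET " + site + "/_api/web/lists/getbytitle('coolwork')/items HTTP/1.1",
  "Accept: application/json;odata=verbose",
  "",
  "--" + batchGuid + "--"
].join("\r\n");

// The whole string is POSTed to <site>/_api/$batch with the request header:
// Content-Type: multipart/mixed; boundary=batch_abc123
```

<p>Getting any one of these delimiters wrong is what produces the unintelligible responses described above, and this bookkeeping is exactly what the RestBatchExecutor hides.</p>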
Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com5tag:blogger.com,1999:blog-3826305938088128320.post-30737811908125832222014-12-29T15:06:00.001-08:002015-01-22T15:11:29.323-08:00Managing Related Items with the SharePoint REST API<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:d3827719-c49b-49ca-a210-5773fe8b23f5" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/ECM" rel="tag">ECM</a>,<a href="http://technorati.com/tags/Search" rel="tag">Search</a>,<a href="http://technorati.com/tags/REST" rel="tag">REST</a></div> <p>The <strong>“Related Items”</strong> column was introduced in SharePoint 2013. It is a site column that is part of the <strong>Task content type</strong>. The column allows you to link other items to a given task. For example if you are doing an invoice approval workflow you may want to link an image of the invoice to the workflow task. The “Related Items” site column is not available to be added to other content types since it is by default part of the “_Hidden” site column group. Of course you can easily change this as explained in this link <a href="http://sharepoint-community.net/profiles/blogs/utilizing-the-new-related-items-column-via-workflow-part-1" target="_blank">Enable Related Items Column</a> allowing you to use it with other content types. The “Related Items” column is not visible in the new or edit form. It can only be accessed in the view form. This is probably due to the fact that the “Related Items” column has an “Invalid” field type and cannot be modified through traditional remote API list item methods. In this post I will show you how to do basic CRUD operations on the related items of a task item using the SharePoint REST API. 
The code samples use the HttpClient from managed code. I was going to use JavaScript, but the SharePoint Remote API exposes only static methods for these operations, which unfortunately cannot be called across domains. As you read through the post you will be surprised by some of the quirks of the API and what to watch out for.</p> <p><img src="https://farm8.staticflickr.com/7499/16123908765_6a5fe69165_o.png"> </p> <h4>First get Authenticated</h4> <p>All the code examples for managing related items must send a form digest value. The code below shows an example to get this value via the REST API. You could also factor out all of the code for creating the HttpClient object and send this in as a parameter to all the examples.</p><pre class="brush:csharp">public string GetDigest()<br />{<br /> string url = "http://servername/";<br /> HttpClient client = new HttpClient(new HttpClientHandler() { UseDefaultCredentials = true });<br /> client.BaseAddress = new System.Uri(url);<br /> string retVal = null;<br /> string cmd = "_api/contextinfo";<br /> client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");<br /> client.DefaultRequestHeaders.Add("ContentType", "application/json");<br /> client.DefaultRequestHeaders.Add("ContentLength", "0");<br /><br /> try <br /> {<br /> var response = client.PostAsJsonAsync(cmd, "").Result;<br /><br /> if (response.IsSuccessStatusCode)<br /> {<br /> try<br /> {<br /> string content = response.Content.ReadAsStringAsync().Result;<br /> var jss = new JavaScriptSerializer();<br /> var val = jss.Deserialize<Dictionary<string, object>>(content);<br /> var d = val["d"] as Dictionary<string, object>;<br /> var wi = d["GetContextWebInformation"] as Dictionary<string, object>;<br /> retVal = wi["FormDigestValue"].ToString();<br /> <br /> }<br /> catch (Exception ex1)<br /> {<br /> System.Diagnostics.Debug.WriteLine(ex1.Message);<br /> <br /> }<br /> <br /> }<br /> <br /> }<br /> catch (Exception ex)<br /> {<br /> 
System.Diagnostics.Debug.WriteLine(ex.Message);<br /> }<br /> <br /> return retVal;<br />}<br /></pre><br /><h4>Get a Task’s Related Items</h4><br /><p>The code below shows how to retrieve the related items for a given task. All of the methods for the <strong>SP.RelatedItemManager</strong> class are static. Static methods must be called by appending the method name to the class name with a “.” (period) rather than a forward slash. The <strong>GetRelatedItems</strong> method takes two parameters. The first parameter is the <strong>SourceListName</strong>. This can be either the name (title) of the list or the ID (GUID) of the list. The code will run faster if you send in a string representing the ID (GUID). The second parameter is the <strong>SourceItemID</strong>. This is the integer value of the task item’s ID. The server code assumes the source list is in the current web you are making the call from. The method also will return a maximum of 9 related items.</p><pre class="brush:csharp">public async void GetRelatedItems(string digest)<br />{<br /><br /> string url = "http://servername";<br /> HttpClient client = new HttpClient(new HttpClientHandler() { UseDefaultCredentials = true });<br /> client.BaseAddress = new System.Uri(url);<br /> client.DefaultRequestHeaders.Clear();<br /> client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");<br /> client.DefaultRequestHeaders.Add("X-RequestDigest", digest);<br /> client.DefaultRequestHeaders.Add("X-HTTP-Method", "POST");<br /><br /> string json = "{'SourceListName': 'POCreation','SourceItemID': 2}";<br /> client.DefaultRequestHeaders.Add("ContentLength", json.Length.ToString());<br /> try<br /> {<br /> StringContent strContent = new StringContent(json);<br /> strContent.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json;odata=verbose");<br /> HttpResponseMessage response = await client.PostAsync("_api/SP.RelatedItemManager.GetRelatedItems", strContent);<br /><br /> 
response.EnsureSuccessStatusCode();<br /> if (response.IsSuccessStatusCode)<br /> {<br /> var content = response.Content.ReadAsStringAsync();<br /> }<br /> else<br /> {<br /> var content = response.Content.ReadAsStringAsync();<br /> }<br /> }<br /> catch (Exception ex)<br /> {<br /> System.Diagnostics.Debug.WriteLine(ex.Message);<br /> }<br /><br />}<br /></pre>Below is an example of the json response. You can get this json from the result of <strong>response.Content.ReadAsStringAsync()</strong>. The result can be parsed using the Newtonsoft.Json assembly.<br /><img src="https://farm9.staticflickr.com/8582/15953632099_474b4620ae_o.png"><br /><h4>Adding a Related Item</h4><br /><p>When adding a related item to an existing task the API can be confusing. The <strong>AddSingleLinkToUrl</strong> method takes 4 parameters, <strong>SourceListName</strong>, <strong>SourceItemID</strong>, <strong>TargetItemUrl</strong>, and <strong>TryAddReverseLink</strong>. Since I had started experimenting with AddSingleLinkToUrl first, I assumed the source parameters would be the list where the related item was coming from and the target parameters represented the task list I was working with. But of course it is the opposite. Just like in the GetRelatedItems method you can use either the list name or ID (GUID) for the SourceListName. The SourceItemID is the ID of the task list item. The TargetItemUrl is the server relative URL of the item you are adding as a related item. In the code below I am using a document from the Shared Documents library. The final parameter TryAddReverseLink is very interesting and this value is set to true when adding related items using the SharePoint UI. When you set this to true the server side code will check to see if the target list also has a “Related Items” field. If it does then the code will add a json value of the source task item to the target item’s “Related Items” field, thus creating a link between the two items. 
This does not raise an error if the target list does not have a “Related Items” field. Finally, a few things to be aware of. You will receive an error if the target URL is the URL to the task item itself. So you cannot relate an item to itself. Secondly, if the source item already has 9 related items then the server code will try to remove any of the 9 that no longer exist and then add the one you’re adding. If it cannot remove any of the existing 9 an error is returned.</p><pre class="brush:csharp">public async void AddRelatedItem(string digest)<br />{<br /> string url = "http://servername/";<br /> HttpClient client = new HttpClient(new HttpClientHandler() { UseDefaultCredentials = true });<br /> client.BaseAddress = new System.Uri(url);<br /> client.DefaultRequestHeaders.Clear();<br /> client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");<br /> client.DefaultRequestHeaders.Add("X-RequestDigest", digest);<br /> client.DefaultRequestHeaders.Add("X-HTTP-Method", "POST");<br /><br /> string json = "{'SourceListName':'POCreation','SourceItemID':2,'TargetItemUrl':'/Shared Documents/A1210251607175080419.pdf','TryAddReverseLink':true}";<br /> client.DefaultRequestHeaders.Add("ContentLength", json.Length.ToString());<br /> try<br /> {<br /> StringContent strContent = new StringContent(json);<br /> strContent.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json;odata=verbose");<br /> HttpResponseMessage response = await client.PostAsync("_api/SP.RelatedItemManager.AddSingleLinkToUrl", strContent);<br /><br /><br /> response.EnsureSuccessStatusCode();<br /> if (response.IsSuccessStatusCode)<br /> {<br /> var content = response.Content.ReadAsStringAsync();<br /> }<br /> else<br /> {<br /> var content = response.Content.ReadAsStringAsync();<br /> }<br /> }<br /> catch (Exception ex)<br /> {<br /> System.Diagnostics.Debug.WriteLine(ex.Message);<br /> }<br /><br />}<br /></pre><br /><h4>Removing a Related Item</h4><br /><p>The 
<strong>DeleteSingleLink</strong> method has 7 parameters. It is more complicated than adding a related item. Once again you have the <strong>SourceListName</strong> and <strong>SourceItemID</strong> parameters. But now you have two more parameters, <strong>SourceWebUrl</strong> and <strong>TargetWebUrl</strong>. These can be null if both lists are in the web where the call is being made. If not, they can be set to either an absolute or relative URL. This method also requires the <strong>TargetListName</strong> and <strong>TargetItemID</strong> parameters which are handled the same way as the source parameters. The final parameter, <strong>TryDeleteReverseLink</strong>, will try to remove the reverse link that may have been created when you added the related item. So it is a good idea to set this to true since you do not want to leave any dead-end relationships.</p><pre class="brush:csharp">public async void DeleteRelatedItem(string digest)<br />{<br /> string url = "http://servername/";<br /> HttpClient client = new HttpClient(new HttpClientHandler() { UseDefaultCredentials = true });<br /> client.BaseAddress = new System.Uri(url);<br /> client.DefaultRequestHeaders.Clear();<br /> client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");<br /> client.DefaultRequestHeaders.Add("X-RequestDigest", digest);<br /> client.DefaultRequestHeaders.Add("X-HTTP-Method", "POST");<br /><br /> string json = "{'SourceListName':'POCreation','SourceItemID':2,'SourceWebUrl':null,'TargetListName':'Documents','TargetItemID':36,'TargetWebUrl':null,'TryDeleteReverseLink':true}";<br /> <br /> client.DefaultRequestHeaders.Add("ContentLength", json.Length.ToString());<br /> try<br /> {<br /> StringContent strContent = new StringContent(json);<br /> strContent.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json;odata=verbose");<br /> HttpResponseMessage response = await client.PostAsync("_api/SP.RelatedItemManager.DeleteSingleLink", strContent);<br /><br /><br /> 
response.EnsureSuccessStatusCode();<br /> if (response.IsSuccessStatusCode)<br /> {<br /> var content = response.Content.ReadAsStringAsync();<br /> }<br /> else<br /> {<br /> var content = response.Content.ReadAsStringAsync();<br /> }<br /> }<br /> catch (Exception ex)<br /> {<br /> System.Diagnostics.Debug.WriteLine(ex.Message);<br /> }<br /><br />}<br /></pre><br /><h4>Building Better Relationships</h4><br /><p>There are other methods available on the SP.RelatedItemManager class such as <strong>GetPageOneRelatedItems</strong>. This is basically the same as calling the GetRelatedItems method but only returns the first 4 items. This method is used from the SharePoint UI. The UI will then call GetRelatedItems when you click the “Show More” link. Another available method similar to AddSingleLinkToUrl is <strong>AddSingleLinkFromUrl</strong>. The difference between the two is the assumption of which SPWeb you are making the call from. AddSingleLinkToUrl assumes the current web is the web where the source list is located, and the AddSingleLinkFromUrl method assumes the current web is the web where the target list is located. So where you are making the call from determines which method to call. It is possible to create a context menu item allowing users to make a document a related item of a task. If you are not sure where the code will be hosted you can just use the <strong>AddSingleLink</strong> method. Similar to the DeleteSingleLink method you must supply more parameters which include the web ids of both the source and target. This allows the server code to relate items across webs. If you want more information on these methods then I recommend you get the <a href="https://visualstudiogallery.msdn.microsoft.com/26a16717-0c9a-4367-8dfd-bb09e7e2deb5" target="_blank">SPRemoteAPIExplorer</a> Visual Studio extension.</p><br /><p>Are these relationships useful? I think so. It allows users to tie together task items with relevant documents located somewhere else. 
It decouples task data from documents allowing for different projects/tasks to work with the same documents. It may possibly be used by Microsoft’s new Delve to relate items. You can retrieve the related items value in search results by mapping the ows_relateditems crawled property to a managed property. The value is stored in json format. You could search the managed property for an item by using the value “ItemId:20”. This would return all items that are related to an item with a list item ID of 20. But to be exact you would have to search for “{"ItemId":20,"WebId":"d2a04afc-9a05-48c8-a7fa-fa98f9496141","ListId":"e5a04afc-9a05-48c8-a7fa-fa98f9496897"}”. This would be difficult for end users.</p><br /><p>I hope you found this post useful. Knowing how to relate items programmatically can make your workflows much more powerful. There is still much more to be discovered in the SharePoint Remote API, and there is still much more improvement needed.</p>
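<p>Since the value stored in the managed property is json, the exact-match token can be assembled programmatically instead of typed by hand. Below is a minimal JavaScript sketch; the {ItemId, WebId, ListId} shape mirrors the example value above, but the exact serialization SharePoint uses for ows_relateditems is an assumption you should verify against a real crawled value.</p>

```javascript
// Sketch: build the exact-match search value for the ows_relateditems
// managed property. The {ItemId, WebId, ListId} shape is assumed from
// the json stored in the Related Items field; verify it against a real
// crawled property value before relying on it.
function buildRelatedItemToken(itemId, webId, listId) {
  return JSON.stringify({ ItemId: itemId, WebId: webId, ListId: listId });
}

var token = buildRelatedItemToken(
  20,
  'd2a04afc-9a05-48c8-a7fa-fa98f9496141',
  'e5a04afc-9a05-48c8-a7fa-fa98f9496897'
);
// A looser query could use only the leading fragment, e.g. 'ItemId:20'.
```

<p>Wrapping this in a small search helper would spare end users from having to type the full json value themselves.</p>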
Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com2tag:blogger.com,1999:blog-3826305938088128320.post-43869349786750506852014-10-29T19:56:00.001-07:002014-10-30T07:21:27.410-07:00Easy SharePoint App Model Deployment for Web Developers (SPFastDeploy 3.5)<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:428c511e-23da-4c4d-86c5-38643f2ec668" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/Apps" rel="tag">Apps</a>,<a href="http://technorati.com/tags/Dev" rel="tag">Dev</a>,<a href="http://technorati.com/tags/VS2013" rel="tag">VS2013</a></div> <p>In March of this year I added support to the SPFastDeploy Visual Studio extension to deploy a file to SharePoint when saving (<a href="http://sharepointfieldnotes.blogspot.com/2014/03/sharepoint-2013-app-model-development.html" target="_blank">SPFastDeploy 3.0</a>). This turned out to be a popular feature. It also supported deploying the JavaScript files generated by the Typescript compiler when saving a Typescript file. In this blog post I will show you how I have added the same support for CoffeeScript and LESS generated files. I have also added support for the Web Essentials minifying on save feature. Finally I will explain the support for deploying linked files in your solution and some minor bug fixes.</p> <p><a href="https://visualstudiogallery.msdn.microsoft.com/9e03d0f5-f931-4125-a5d1-7c1529554fbd" target="_blank">SPFastDeploy 3.5</a> </p> <h4>CoffeeScript and LESS Support</h4> <p>The Visual Studio Web Essentials extension adds many web development tools. These include the Typescript, CoffeeScript and LESS languages. These tools compile the code and generate the related JavaScript and CSS files. 
SPFastDeploy 3.0 supported deploying the related JavaScript file for Typescript. Version 3.5 now supports deploying the related files when saving CoffeeScript and LESS files. The SPFastDeploy extension options have been expanded to include options for each supported language. The category options give you the ability to define the amount of time to wait and look for the generated related file before timing out. In addition SPFastDeploy supports the Web Essentials ability to minify on save. So if you have generated a minified JavaScript or CSS file and have the Web Essentials feature enabled, then SPFastDeploy will look for the minified version of the related file. Please note it is up to you to have the minified file generated in the same folder as the corresponding non-minified file. SPFastDeploy only looks for the minified file and does not generate it.</p> <p><img src="https://farm8.staticflickr.com/7508/15476076260_056f2378df_o.png"></p> <p><img src="https://farm8.staticflickr.com/7532/15475677327_d72a882beb_o.png"></p> <p><img src="https://farm4.staticflickr.com/3949/15661659085_b8f6514ef9_o.png"></p> <h4>Minify Support</h4> <p>SPFastDeploy 3.5 supports deploying auto minified JavaScript and CSS files not generated by compilers. So if you are just editing and saving JavaScript and CSS files then SPFastDeploy will deploy the minified related file when saving your changes. You will see two new options, one for JavaScript and one for CSS. Once again it is up to you to generate the minified file in the same folder using Web Essentials. Please note that if you save the file and no changes have been made, then Web Essentials will not generate a new minified file and SPFastDeploy will time out waiting for the new minified file. 
</p> <p><img src="https://farm4.staticflickr.com/3956/15475007649_3a8e6c3290_b.jpg"></p> <p><img src="https://farm6.staticflickr.com/5608/15475496418_3c1c7e54a1_b.jpg"></p> <h4>Linked File Support</h4> <p>Since Visual Studio 2010 you can add an existing file to a solution by adding a link to that file from another location or solution. SPFastDeploy now supports deploying these types of files when saving from SharePoint app model solutions. Linked files are denoted by the shortcut icon. </p> <p><img src="https://farm8.staticflickr.com/7541/15658958821_d629c4b2a5_o.png"></p> <h4>Bug Fixes</h4> <p>SPFastDeploy 3.5 fixes a bug where Visual Studio crashes if you load a JavaScript or CSS file from outside the project and then save it with the “Deploy On Save” option turned on. This version also fixes a bug where, if you changed the Site URL property of the SharePoint app project, SPFastDeploy was not aware of the change and continued to deploy to the previous Site URL. Previously you had to restart Visual Studio before SPFastDeploy would pick up the change.</p> <h4>Be More Productive with Web Essentials and SPFastDeploy 3.5</h4> <p>Deploying your changes to a SharePoint App automatically when saving makes SharePoint App Model development easy. Now with SPFastDeploy 3.5 you can take this time-saving feature and combine it with the web development tools from Web Essentials, saving even more time. If you want support for other web development languages such as Sweet.js or SASS then please put your request in at the SPFastDeploy Q/A section of the Visual Studio extension’s home page. 
</p> Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com1tag:blogger.com,1999:blog-3826305938088128320.post-62357779970421803642014-09-17T10:50:00.001-07:002014-09-17T10:56:46.409-07:00Sharing Documents with the SharePoint REST API<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:8c13c6b1-2794-4000-90e7-cd55199625db" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/ECM" rel="tag">ECM</a>,<a href="http://technorati.com/tags/REST" rel="tag">REST</a></div> <p>Sharing documents is a pretty basic thing in SharePoint. Most everyone is familiar with the callout action <strong>“Share” </strong>which enables you to share a document with other people. It is a mainstay of collaboration when working with documents in a team. </p> <p><img src="https://farm4.staticflickr.com/3893/15229900935_a173673f5b_o.png"></p> <p>However, there is very little documentation on how to do this through the remote API of SharePoint. In this post I will show how you can do this using the REST API and JavaScript. The call has many arguments which can be confusing. The best description I have found for the UpdateDocumentSharingInformation method is here <a href="http://msdn.microsoft.com/en-us/library/hh631365(v=office.12).aspx" target="_blank">API Description</a>. Just remember when you are sharing a document you are granting permissions. If you want to use this REST API method in a SharePoint app, then make sure you grant the “Web Manage” permissions in your app manifest. When you click the Share action you are presented a dialog to grant either view or edit permissions to multiple users or roles. 
</p> <p><img src="https://farm4.staticflickr.com/3888/15229900915_77b9f68e92_o.png"></p> <h4>The Code</h4> <p>Below is the REST call using JavaScript code that shares a document from a SharePoint hosted app. </p><pre class="brush:jscript">function shareDocument()<br />{<br /> var hostweburl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));<br /> var appweburl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));<br /> var restSource = appweburl + "/_api/SP.Sharing.DocumentSharingManager.UpdateDocumentSharingInfo";<br /><br /> <br /> $.ajax(<br /> {<br /> 'url': restSource,<br /> 'method': 'POST',<br /> 'data': JSON.stringify({<br /> 'resourceAddress': 'http://basesmc15/Shared%20Documents/A1210251607172880165.pdf',<br /> 'userRoleAssignments': [{<br /> '__metadata': {<br /> 'type': 'SP.Sharing.UserRoleAssignment'<br /> },<br /> 'Role': 1,<br /> 'UserId': 'Chris Tester'<br /> }],<br /> 'validateExistingPermissions': false,<br /> 'additiveMode': true,<br /> 'sendServerManagedNotification': false,<br /> 'customMessage': "Please look at the following document",<br /> 'includeAnonymousLinksInNotification': false<br /> }),<br /> 'headers': {<br /> 'accept': 'application/json;odata=verbose',<br /> 'content-type': 'application/json;odata=verbose',<br /> 'X-RequestDigest': $('#__REQUESTDIGEST').val()<br /> },<br /> 'success': function (data) {<br /> var d = data;<br /> },<br /> 'error': function (err) {<br /> alert(JSON.stringify(err));<br /> }<br /> }<br /> );<br /><br />}<br /></pre><br /><h4>The Parameters</h4><br /><p><strong>ResourceAddress</strong>: This is the full URL to the document you want to share.</p><br /><p><strong>UserRoleAssignments</strong>: This is an array of users and roles that you want to share the document with. The Role property represents which permission you are assigning: 1 = View, 2 = Edit, 3 = Owner, 0 = None. The UserId property can be the name of the user or a role. 
For example, if you wanted to share the document with the “Translation Managers” role and the “Steve Tester” user you would use this JSON:</p><pre class="brush:jscript">'userRoleAssignments': [{<br /> '__metadata': {<br /> 'type': 'SP.Sharing.UserRoleAssignment'<br /> },<br /> 'Role': 1,<br /> 'UserId': 'Translation Managers'<br /> },<br /> {<br /> '__metadata': {<br /> 'type': 'SP.Sharing.UserRoleAssignment'<br /> },<br /> 'Role': 1,<br /> 'UserId': 'Steve Tester'<br /> }]<br /></pre><br /><p><strong>ValidateExistingPermissions</strong>: A flag indicating how to honor a requested permission for a user. If this value is "true", SharePoint will not grant the requested permission if a user already has sufficient permissions, and if this value is "false", then SharePoint will grant the requested permission whether or not a user already has the same or more permissions. This parameter only applies when the <em><strong>additiveMode</strong></em> parameter is set to true.</p><br /><p><strong>AdditiveMode</strong>: A flag indicating whether the permission setting uses the additive or strict mode. If this value is "true", the permission setting uses the additive mode, which means that the specified permission will be added to the user’s current list of permissions if it is not there already, and if this value is "false", the permission setting uses the strict mode, which means that the specified permission will replace the user’s current permissions. This parameter is useful when you want to stop sharing a document with a person or group. In this case you would set AdditiveMode to false using Role = 0.</p><br /><p><strong>SendServerManagedNotification</strong>: A flag to indicate whether or not to generate an email notification to each recipient in the <strong>userRoleAssignments</strong> array after the document update is completed successfully. 
If this value is "true", then SharePoint will send an email notification if an email server is configured, and if the value is "false", no email notification will be sent.</p><br /><p><strong>CustomMessage</strong>: A custom message to be sent in the body of the email.</p><br /><p><strong>IncludeAnonymousLinksInNotification</strong>: A flag that indicates whether or not to include anonymous access links in the email notification to each recipient in the <strong>userRoleAssignments</strong> array after the document update is completed successfully. If the value is "true", the SharePoint will include an anonymous access link in the email notification, and if the value is "false", no link will be included. This is useful if you are sharing the document with an external user. You must be running this code with full control or as a Site Owner if you want to share the document with external users.</p><br /><h4>The Results</h4><br /><p>After calling the above code you will receive a result for every user or role you have shared the document with. The code does not return an error and you must examine the results to determine success. Check the Status property. If this is false typically there will be a message in the Message property explaining the problem. It also tells you whether the user is known. If the user is not known then it is considered an external user. </p><br /><p><img src="https://farm4.staticflickr.com/3876/15082783527_717c2b760d_o.png"></p><br /><h4>Resting and Sharing</h4><br /><p>As you can see there is a lot more to sharing a document versus what is presented in the SharePoint UI. The UpdateDocumentSharingInfo method has many options. You can use this in your custom SharePoint apps to build more robust sharing of documents which could include the option of a custom email message or bulk sharing of documents. This could also be used to stop sharing a document. I have yet to find an easy way to stop sharing a document using the SharePoint UI. 
</p> Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com13tag:blogger.com,1999:blog-3826305938088128320.post-33787493748342389512014-08-24T15:11:00.001-07:002014-08-25T06:56:41.462-07:00Using the HttpClient Class with SharePoint 2013 REST API<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:d17a2589-bc7c-4b67-be5d-5e8b084cf731" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/REST" rel="tag">REST</a></div> <p>The System.Net.Http.HttpClient class is new in .Net framework 4.5 and was introduced under the ASP.Net Web API. The class has many methods that support asynchronous programming and is the best choice for writing client apps that make HTTP requests. Compared to the traditional System.Net.HttpWebRequest class, not only does the HttpClient class have more options, it also has extension methods such as the <strong>PostAsJsonAsync</strong> and <strong>PostAsXmlAsync</strong> methods which are available in the <strong>System.Net.Http.Formatting</strong> assembly shipped with the ASP.Net MVC 4 framework. In this post I am going to give you a tip on how to successfully post to the SharePoint REST API using the HttpClient class. There are many examples on how to post to the SharePoint REST API using the HttpWebRequest class but hardly any using the HttpClient class. If you are not careful you would think that the HttpClient class did not work with the SharePoint REST API. </p> <h4>I Keep Getting a 400 Bad Request Message when Posting</h4> <p>Below is a small example on how to use the HttpClient to create a folder in SharePoint 2013. The key to success is setting the Content-Type header correctly when posting. 
If you have used the REST API you know you must set the Content-Type header to “application/json; odata=verbose”. If you don’t you will get a<strong> “400 Bad Request”</strong> error. You can use the HttpClient.DefaultRequestHeaders collection to add headers but when trying to add the “<strong>Content-Type”</strong> header the collection will throw an <strong>“InvalidOperationException”</strong> with this message <strong>“{"Misused header name. Make sure request headers are used with HttpRequestMessage, response headers with HttpResponseMessage, and content headers with HttpContent objects."}”. </strong>So I must not be setting the content-type correctly on the HttpContent object. The StringContent class is what you are supposed to use as an argument when calling the HttpClient.PostAsync method. Looking at the StringContent class your first inclination is to use the constructor and give it the json that you want to post. The constructor takes the json, encoding type and media type as arguments. The media type corresponds to the content-type.</p><pre class="brush:csharp">StringContent strContent = new StringContent(json, System.Text.Encoding.UTF8, "application/json;odata=verbose");<br /></pre><br /><p>Unfortunately sending <strong>“application/json;odata=verbose”</strong> as the media type argument causes a<strong> “FormatException”</strong> with the message <strong>{"The format of value 'application/json;odata=verbose' is invalid."}</strong>. If you just use<strong> “application/json” </strong>you will receive a<strong> “400 bad request”</strong> error because the <strong>“odata=verbose”</strong> is missing. So how do you get around this? First you must create the StringContent object with the json as the only argument to the constructor and then set the StringContent.Headers.ContentType property to <strong>“application/json;odata=verbose”</strong> using the MediaTypeHeaderValue.Parse method. 
</p><pre class="brush:csharp">StringContent strContent = new StringContent(json); <br />strContent.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json;odata=verbose");<br /></pre><br /><p>Mystery solved.</p><pre class="brush:csharp">private void CreateFolder(HttpClient client, string digest)<br />{<br /> client.DefaultRequestHeaders.Clear();<br /> client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");<br /> client.DefaultRequestHeaders.Add("X-RequestDigest", digest);<br /> client.DefaultRequestHeaders.Add("X-HTTP-Method", "POST");<br /> <br /> string json = "{'__metadata': { 'type': 'SP.Folder' }, 'ServerRelativeUrl': '/shared documents/folderhttp1'}";<br /> client.DefaultRequestHeaders.Add("ContentLength", json.Length.ToString());<br /> try<br /> {<br /> StringContent strContent = new StringContent(json); <br /> strContent.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json;odata=verbose");<br /> HttpResponseMessage response = client.PostAsync("_api/web/folders", strContent).Result;<br /> <br /> response.EnsureSuccessStatusCode();<br /> if (response.IsSuccessStatusCode)<br /> {<br /> var content = response.Content.ReadAsStringAsync(); <br /> }<br /> else<br /> { <br /> var content = response.Content.ReadAsStringAsync(); <br /> }<br /> }<br /> catch (Exception ex)<br /> {<br /> System.Diagnostics.Debug.WriteLine(ex.Message);<br /> }<br /> <br />}<br /></pre><br /><h4>HttpClient is Here</h4><br /><p>HttpClient is a modern HTTP client for .NET. It provides a flexible and extensible API for accessing all things exposed through HTTP. You should use it instead of the HttpWebRequest. You can read more about it here <a href="http://msdn.microsoft.com/en-us/library/system.net.http(v=vs.118).aspx" target="_blank">System.Net.Http</a>. 
Another great source of information when using the HttpClient with SharePoint REST is Dennis RedField’s blog <a href="http://dlr2008.wordpress.com/2013/10/31/sharepoint-2013-rest-api-the-c-connection-part-1-using-system-net-http-httpclient/" target="_blank">Cloud 2013 or Bust</a>. This blog has an in-depth 4-part series on how to use the HttpClient with SharePoint REST. Changing our habits as developers can be a slow process. However, some new APIs can be confusing especially when used against SharePoint 2013. SharePoint 2013 is not fully OData compliant yet and has some quirks, namely content-type checking. I hope this tip can save you some time.</p> Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com4tag:blogger.com,1999:blog-3826305938088128320.post-67247715377604949462014-07-31T16:08:00.000-07:002014-08-02T16:33:12.518-07:00Uploading Large Documents into SharePoint Online with REST,CSOM, and RPC using C#<div id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:1fe0fe32-ca12-40c3-b61d-47e77fbbc3d1" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px">Technorati Tags: <a href="http://technorati.com/tags/SP2013" rel="tag">SP2013</a>,<a href="http://technorati.com/tags/REST" rel="tag">REST</a>,<a href="http://technorati.com/tags/O365" rel="tag">O365</a></div> <p>There are many articles that give great examples on how to upload documents to SharePoint Online using jQuery and REST. These are useful to get around the message size limitation of using CSOM/JSOM when uploading documents. This message size limitation is not configurable in SharePoint Online. There are few examples on how to upload large documents using C#. In this blog post I will show you how to use C# with SharePoint REST, managed CSOM and RPC to upload large documents (up to 2GB) to SharePoint Online. 
There are a few things you need to take care of to get all these to work with SharePoint Online. </p> <h4>Credentials and Cookie Containers</h4> <p>In the code examples below both REST and RPC use the HttpWebRequest class to communicate with SharePoint. When using this class from C# you must set the Credentials and the CookieContainer properties of the HttpWebRequest object. The following helper methods create the Microsoft.SharePoint.Client.SharePointOnlineCredentials and get the System.Net.CookieContainer for the SharePointOnlineCredentials.</p><pre class="brush:csharp">public static class Utils<br />{<br /><br /> public static CookieContainer GetO365CookieContainer(SharePointOnlineCredentials credentials, string targetSiteUrl)<br /> {<br /><br /> Uri targetSite = new Uri(targetSiteUrl);<br /> string cookieString = credentials.GetAuthenticationCookie(targetSite);<br /> CookieContainer container = new CookieContainer();<br /> string trimmedCookie = cookieString.TrimStart("SPOIDCRL=".ToCharArray());<br /> container.Add(new Cookie("FedAuth", trimmedCookie, string.Empty, targetSite.Authority));<br /> return container;<br /><br /><br /> }<br /><br /> public static SharePointOnlineCredentials GetO365Credentials(string userName, string passWord)<br /> {<br /> SecureString securePassWord = new SecureString();<br /> foreach (char c in passWord.ToCharArray()) securePassWord.AppendChar(c);<br /> SharePointOnlineCredentials credentials = new SharePointOnlineCredentials(userName, securePassWord);<br /> return credentials;<br /> }<br /><br /><br /><br />}<br /></pre><br /><h4>Uploading Large Documents With REST</h4><br /><p>The following code takes the site URL, document library title, and a file path to a local file and adds the file to the root folder collection of the site. If you want to use folders you can modify this code to handle it. The REST call requires a form digest value to be set so I have included the code that makes a REST call to the contextinfo to get it. 
Please make sure to set the timeout on the HttpWebRequest to about 10 minutes, because large files will exceed the default timeout of 100 seconds. Ten minutes should be adequate to cover the unpredictable upload speeds of ISPs and SharePoint Online.</p><pre class="brush:csharp">public static void UploadRest(string siteUrl, string libraryName, string filePath)<br />{<br /> byte[] binary = IO.File.ReadAllBytes(filePath);<br /> string fname = IO.Path.GetFileName(filePath);<br /> string result = string.Empty;<br /> string resourceUrl = siteUrl + "/_api/web/lists/getbytitle('" + libraryName + "')/rootfolder/files/add(url='" + fname + "',overwrite=true)";<br /><br /> HttpWebRequest wreq = WebRequest.Create(resourceUrl) as HttpWebRequest;<br /> wreq.UseDefaultCredentials = false;<br /> SharePointOnlineCredentials credentials = Utils.GetO365Credentials("your login", "your password");<br /> wreq.Credentials = credentials;<br /> wreq.CookieContainer = Utils.GetO365CookieContainer(credentials, siteUrl);<br /><br /> string formDigest = GetFormDigest(siteUrl, credentials, wreq.CookieContainer);<br /> wreq.Headers.Add("X-RequestDigest", formDigest);<br /> wreq.Method = "POST";<br /> wreq.Timeout = 1000000; // milliseconds -- well above the 100 second default<br /> wreq.Accept = "application/json; odata=verbose";<br /> wreq.ContentLength = binary.Length;<br /><br /> using (IO.Stream requestStream = wreq.GetRequestStream())<br /> {<br /> requestStream.Write(binary, 0, binary.Length);<br /> }<br /><br /> WebResponse wresp = wreq.GetResponse();<br /> using (IO.StreamReader sr = new IO.StreamReader(wresp.GetResponseStream()))<br /> {<br /> result = sr.ReadToEnd();<br /> }<br />}<br /><br />public static string GetFormDigest(string siteUrl, ICredentials credentials, CookieContainer cc)<br />{<br /> string formDigest = null;<br /><br /> string resourceUrl = siteUrl + "/_api/contextinfo";<br /> HttpWebRequest wreq = WebRequest.Create(resourceUrl) as HttpWebRequest;<br /><br /> wreq.Credentials = credentials;<br /> wreq.CookieContainer = cc;<br /> wreq.Method = "POST";<br /> wreq.Accept = "application/json;odata=verbose";<br /> wreq.ContentLength = 0;<br /> wreq.ContentType = "application/json";<br /> string result;<br /><br /> WebResponse wresp = wreq.GetResponse();<br /> using (IO.StreamReader sr = new IO.StreamReader(wresp.GetResponseStream()))<br /> {<br /> result = sr.ReadToEnd();<br /> }<br /><br /> var jss = new JavaScriptSerializer();<br /> var val = jss.Deserialize&lt;Dictionary&lt;string, object&gt;&gt;(result);<br /> var d = val["d"] as Dictionary&lt;string, object&gt;;<br /> var wi = d["GetContextWebInformation"] as Dictionary&lt;string, object&gt;;<br /> formDigest = wi["FormDigestValue"].ToString();<br /><br /> return formDigest;<br />}<br /></pre><br /><h4>Uploading Large Documents with CSOM</h4><br /><p>At one time I thought this could not be done with CSOM, but fellow MVP Joris Poelmans brought to my attention that the Core.LargeFileUpload sample in the <a href="https://github.com/OfficeDev/PnP" target="_blank">O365 Development Patterns and Practices</a> project was able to upload files larger than 3 MB. This only works if you set the FileCreationInformation ContentStream property to an open stream on the file. This gets around the message size limit of CSOM because the ContentStream uses MTOM optimizations and sends the raw binary rather than a base64 encoded binary. This is much more efficient and faster than the other methods. This appears to be a later change in CSOM, optimized for SharePoint Online. The CSOM code does not need a cookie container. 
I also tried the File.SaveBinaryDirect method, but I received <strong>“Cannot Invoke HTTP DAV Request”</strong> because this is not supported in SharePoint Online.</p><pre class="brush:csharp"> public static void UploadDocumentContentStream(string siteUrl, string libraryName, string filePath)<br /> {<br /> ClientContext ctx = new ClientContext(siteUrl);<br /> ctx.RequestTimeout = 1000000; // milliseconds<br /> ctx.Credentials = Utils.GetO365Credentials("your login", "your password");<br /> Web web = ctx.Web;<br /><br /> using (IO.FileStream fs = new IO.FileStream(filePath, IO.FileMode.Open))<br /> {<br /> FileCreationInformation flciNewFile = new FileCreationInformation();<br /><br /> // The key difference: the ContentStream property streams the raw binary<br /> flciNewFile.ContentStream = fs;<br /> flciNewFile.Url = IO.Path.GetFileName(filePath);<br /> flciNewFile.Overwrite = true;<br /><br /> List docs = web.Lists.GetByTitle(libraryName);<br /> Microsoft.SharePoint.Client.File uploadFile = docs.RootFolder.Files.Add(flciNewFile);<br /><br /> ctx.Load(uploadFile);<br /> ctx.ExecuteQuery();<br /> }<br /> }<br /></pre><br /><h4>Uploading Large Documents with RPC</h4><br /><p>RPC still lives and is supported in SharePoint Online. The code below is simplified. RPC can be hard to understand because the parameter syntax dates back many years; it is basically an HTTP POST to a C++ DLL (author.dll). It can be fast, but it was not faster than CSOM in my tests. The parameters and the file’s binary content must be combined into one byte array, separated by a line feed, before posting. The libraryName parameter cannot be the title of the document library; it must be the library’s actual URL segment, so instead of Documents you must use Shared Documents. You will note many of the parameters are URL encoded because RPC is very particular about characters in the URL. Finally, note that the code feeds the byte array to the request stream in chunks, which helps avoid triggering SharePoint Online throttling limits. 
</p><pre class="brush:csharp"> public static void UploadDocumentRPC(string siteUrl, string libraryName, string filePath)<br /> {<br /> string method = HttpUtility.UrlEncode("put document:14.0.2.5420");<br /> string serviceName = HttpUtility.UrlEncode(siteUrl);<br /> string document = HttpUtility.UrlEncode(libraryName + "/" + IO.Path.GetFileName(filePath));<br /> string metaInfo = string.Empty;<br /> string putOption = "overwrite";<br /> string keepCheckedOutOption = "false";<br /> string putComment = string.Empty;<br /> string result = string.Empty;<br /><br /> string fpRPCCallStr = "method={0}&service_name={1}&document=[document_name={2};meta_info=[{3}]]&put_option={4}&comment={5}&keep_checked_out={6}";<br /> fpRPCCallStr = String.Format(fpRPCCallStr, method, serviceName, document, metaInfo, putOption, putComment, keepCheckedOutOption);<br /><br /> // The RPC command and the file binary are joined by a line feed<br /> byte[] fpRPCCall = System.Text.Encoding.UTF8.GetBytes(fpRPCCallStr + "\n");<br /> byte[] postData = IO.File.ReadAllBytes(filePath);<br /> byte[] data;<br /><br /> if (postData != null && postData.Length > 0)<br /> {<br /> data = new byte[fpRPCCall.Length + postData.Length];<br /> fpRPCCall.CopyTo(data, 0);<br /> postData.CopyTo(data, fpRPCCall.Length);<br /> }<br /> else<br /> {<br /> data = new byte[fpRPCCall.Length];<br /> fpRPCCall.CopyTo(data, 0);<br /> }<br /><br /> HttpWebRequest wReq = WebRequest.Create(siteUrl + "/_vti_bin/_vti_aut/author.dll") as HttpWebRequest;<br /> SharePointOnlineCredentials credentials = Utils.GetO365Credentials("your login", "your password");<br /> wReq.Credentials = credentials;<br /> wReq.CookieContainer = Utils.GetO365CookieContainer(credentials, siteUrl);<br /> wReq.Method = "POST";<br /> wReq.Timeout = 1000000; // milliseconds<br /> wReq.ContentType = "application/x-vermeer-urlencoded";<br /> wReq.Headers.Add("X-Vermeer-Content-Type", "application/x-vermeer-urlencoded");<br /> wReq.ContentLength = data.Length;<br /><br /> using (IO.Stream requestStream = wReq.GetRequestStream())<br /> {<br /> // Write the payload in 2 MB chunks<br /> int chunkSize = 2097152;<br /> int tailSize;<br /> int chunkNum = Math.DivRem(data.Length, chunkSize, out tailSize);<br /><br /> for (int i = 0; i < chunkNum; i++)<br /> {<br /> requestStream.Write(data, chunkSize * i, chunkSize);<br /> }<br /><br /> if (tailSize > 0)<br /> requestStream.Write(data, chunkSize * chunkNum, tailSize);<br /> }<br /><br /> WebResponse wresp = wReq.GetResponse();<br /> using (IO.StreamReader sr = new IO.StreamReader(wresp.GetResponseStream()))<br /> {<br /> result = sr.ReadToEnd();<br /> }<br /> }<br /></pre><br /><h4>Three Ways of Uploading Large Documents to SharePoint Online</h4><br /><p>All of the above code examples are good ways to upload large documents to SharePoint Online. All of them use the Client Object Model to create the credentials and cookie that SharePoint Online requires; getting the cookie is rather complicated without it. All three methods require that you set the request timeout to a large value because uploading to SharePoint Online is much slower than SharePoint on-premises. Experiment with the code samples. I tested these with 200 MB files and CSOM was the fastest, but your results may vary. I like variety and having multiple ways of accomplishing a task.</p> Steve Curranhttp://www.blogger.com/profile/08379275170889570527noreply@blogger.com12
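<p>All three samples call Utils.GetO365Credentials and Utils.GetO365CookieContainer without showing them. Below is a minimal sketch of what such helpers might look like; the names match the calls above, but treat this as an illustration rather than the exact code, built on the SharePointOnlineCredentials class and its GetAuthenticationCookie method from the Client Object Model.</p><pre class="brush:csharp">// Sketch of the hypothetical Utils helpers used in the samples above.
// Assumes a reference to Microsoft.SharePoint.Client and Microsoft.SharePoint.Client.Runtime.
public static class Utils
{
    // Build SharePoint Online credentials from a plain login and password.
    public static SharePointOnlineCredentials GetO365Credentials(string login, string password)
    {
        SecureString secure = new SecureString();
        foreach (char c in password)
            secure.AppendChar(c);
        return new SharePointOnlineCredentials(login, secure);
    }

    // Exchange the credentials for the SPOIDCRL authentication cookie
    // and place it in a container that HttpWebRequest can send.
    public static CookieContainer GetO365CookieContainer(SharePointOnlineCredentials credentials, string siteUrl)
    {
        Uri uri = new Uri(siteUrl);
        string authCookie = credentials.GetAuthenticationCookie(uri);
        CookieContainer container = new CookieContainer();
        container.Add(new Cookie("SPOIDCRL",
            authCookie.Replace("SPOIDCRL=", string.Empty)) { Domain = uri.Host });
        return container;
    }
}</pre>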