Many developers new to SharePoint want to know how to get content into SharePoint from another application, usually a remote application running either as an ASP.NET application or as a desktop application. These developers may be familiar with the SharePoint object model and how to use it to put content into SharePoint, but when it comes to doing this remotely there seems to be a lot of confusion. The confusion is brought on by the fact that there are numerous out-of-the-box ways of doing it: you can put content into SharePoint using web services, WebDav, or FrontPage remote procedure calls (RPC). The problem arises when developers choose a method and then discover it does not support certain functions they would normally see either in the SharePoint UI or in the SharePoint object model. This article gives a brief description of the methods available for remotely putting content into SharePoint and compares them on factors you should be aware of as a SharePoint developer: complexity, folder creation, scalability, and indexing. Complexity and scalability are rated on a scale of 1 through 10.
| Method    | Complexity | Scalability | Indexing | Folder Creation |
|-----------|------------|-------------|----------|-----------------|
| Copy.asmx | 5          | 4           | yes      | yes*            |
| WebDav    | 2          | 5           | yes*     | yes*            |
| RPC       | 10         | 10          | yes      | yes             |
* must be used in conjunction with Lists.asmx web service
Copy Web Service
To create content remotely, the copy web service is probably your best bet. It enables you to create a new document and send its metadata for indexing in one call, which makes it more scalable. Users often want to create a new folder to store their documents; unfortunately, the copy web service has no method for creating a folder. The following is a code snippet for creating new content in SharePoint via the copy web service:
public static void CreateNewDocumentWithCopyService(string fileName)
{
    copyservice.Copy c = new copyservice.Copy();
    c.Url = "http://servername/sitename/_vti_bin/copy.asmx";
    c.UseDefaultCredentials = true;

    byte[] myBinary = File.ReadAllBytes(fileName);
    string destination = "http://servername/sitename/doclibrary/" + Path.GetFileName(fileName);
    string[] destinationUrl = { destination };

    copyservice.FieldInformation info1 = new copyservice.FieldInformation();
    info1.DisplayName = "Title";
    info1.InternalName = "Title";
    info1.Type = copyservice.FieldType.Text;
    info1.Value = "new title";

    copyservice.FieldInformation info2 = new copyservice.FieldInformation();
    info2.DisplayName = "Modified By";
    info2.InternalName = "Editor";
    info2.Type = copyservice.FieldType.User;
    info2.Value = "-1;#servername\\testmoss";

    copyservice.FieldInformation[] info = { info1, info2 };
    copyservice.CopyResult[] result;

    try
    {
        // When creating new content, use the same URL in the SourceUrl
        // argument as in the destination URL argument.
        c.CopyIntoItems(destination, destinationUrl, info, myBinary, out result);
    }
    catch (Exception ex)
    {
        // log or rethrow as appropriate
    }
}
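If you need to know whether the server accepted the file, you can inspect the CopyResult array that CopyIntoItems returns through its out parameter. A minimal sketch, using the same proxy as above:

copyservice.CopyResult[] results;
c.CopyIntoItems(destination, destinationUrl, info, myBinary, out results);
foreach (copyservice.CopyResult r in results)
{
    // ErrorCode is Success when the item was created at DestinationUrl
    if (r.ErrorCode != copyservice.CopyErrorCode.Success)
        Console.WriteLine("Copy failed for {0}: {1}", r.DestinationUrl, r.ErrorMessage);
}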
The benefit of using the copy web service is that it is simple to code against; most developers are familiar with web service programming. It also supports creating content and metadata in one call, thus not creating multiple versions. Unfortunately, this method suffers from one problem: the use of byte arrays. If you plan on uploading large files, let's say in excess of 2 MB, then chances are you will receive sporadic “out of memory” errors. It may not happen on your development server but may happen on your production server, because the Windows OS needs to allocate byte arrays in contiguous memory. If the server's memory is fragmented (it has a lot of available memory but not much contiguous memory), you will receive this error. Thus, the copy web service is not very scalable. Finally, web services tend to be verbose given their SOAP protocol, and the marshalling from string to native types makes them slower than other methods.
WebDav
Most developers are familiar with WebDav because it is what displays document libraries in Windows Explorer, where the familiar dragging and dropping of files into SharePoint can be done. You can accomplish the same thing with the System.Net.WebClient class as follows:
public static void UploadFile(string fileName, string destination)
{
    WebClient wc = new WebClient();
    wc.UseDefaultCredentials = true;
    // PUT the file bytes directly to the document library URL
    byte[] response = wc.UploadFile(destination + Path.GetFileName(fileName), "PUT", fileName);
}
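For example (the server URL and file path are placeholders):

// The destination must be the document library URL, ending with a slash.
UploadFile(@"C:\temp\report.docx", "http://servername/sitename/doclibrary/");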
OK, this seems simple enough, and as you can see it is not as complex as using the copy web service. However, WebDav does not support sending any metadata along with the file content. This can be a major problem if the document library has multiple content types, because the new file will be put into the document library with the default content type. Another big issue arises if the default content type has required fields: the file will remain checked out until those fields are populated, which prevents the document from being seen by other users or returned in any searches. WebDav is a great solution if you are just bulk migrating data from an external data store to SharePoint, but you will more than likely have to do extra work afterwards, and adding metadata after uploading will unnecessarily create extra versions of the document. The fact that WebDav uses straight HTTP rather than the SOAP protocol makes it more scalable than the copy web service, but it still suffers from using a byte array to upload the file, so sooner or later you will run into “out of memory” exceptions. So how can you create a folder before using WebDav? You can use the lists web service to accomplish this:
public static XmlNode UpdateListItemCreateFolder(string docLibraryName, string folderName)
{
    listservice.Lists listProxy = new listservice.Lists();
    // FSObjType=1 tells SharePoint to create a folder rather than an item
    string xmlFolder = "<Batch OnError='Continue'><Method ID='1' Cmd='New'>" +
        "<Field Name='ID'>New</Field><Field Name='FSObjType'>1</Field>" +
        "<Field Name='BaseName'>" + folderName + "</Field></Method></Batch>";
    XmlDocument doc = new XmlDocument();
    doc.LoadXml(xmlFolder);
    XmlNode batchNode = doc.SelectSingleNode("//Batch");
    listProxy.Url = "http://servername/sitename/_vti_bin/lists.asmx";
    listProxy.UseDefaultCredentials = true;
    XmlNode resultNode = listProxy.UpdateListItems(docLibraryName, batchNode);
    return resultNode;
}
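The lists web service can also backfill metadata after a WebDav upload. A minimal sketch, assuming you already know the item's ID (the method name and field are illustrative, and remember this creates an extra version if versioning is enabled):

public static XmlNode UpdateListItemSetTitle(string listName, string itemId, string title)
{
    listservice.Lists listProxy = new listservice.Lists();
    listProxy.Url = "http://servername/sitename/_vti_bin/lists.asmx";
    listProxy.UseDefaultCredentials = true;
    // Cmd='Update' targets an existing item by its ID and sets the Title field
    string xml = "<Batch OnError='Continue'><Method ID='1' Cmd='Update'>" +
        "<Field Name='ID'>" + itemId + "</Field>" +
        "<Field Name='Title'>" + title + "</Field></Method></Batch>";
    XmlDocument doc = new XmlDocument();
    doc.LoadXml(xml);
    return listProxy.UpdateListItems(listName, doc.SelectSingleNode("//Batch"));
}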
FrontPage RPC (Remote Procedure Calls)
Most developers are not familiar with RPC and what it can do. The complexity of coding RPC is high because constructing commands and interpreting responses can be tedious and error prone. However, this method proves to be the most scalable and the fastest. It supports sending both the content and the metadata in one call, it has numerous commands including one for creating folders, and it supports the use of streams rather than a byte array. Below is sample code to create a new document in SharePoint using RPC with a stream.
public static void CreateDocumentRPC(string name, string docLib, string title, bool overWrite, Stream fileBinary)
{
    string method = "put document: 12.0.0.4518";
    string serviceName = "http://servername/sitename/_vti_bin/_vti_aut/author.dll";
    string document = docLib + "/" + name;
    string metaInfo = string.Empty;
    string putOption = overWrite ? "overwrite" : "edit";
    string keepCheckedOutOption = "false";
    string comment = string.Empty;
    string returnStr = string.Empty;
    byte[] data;
    string fpRPCCallStr = "method={0}&service_name={1}&document=[document_name={2};meta_info=[{3}]]&put_option={4}&comment={5}&keep_checked_out={6}";

    method = HttpUtility.UrlEncode(method);
    putOption = HttpUtility.UrlEncode(putOption);
    // Metadata is sent as name;type|value pairs (SW = string)
    metaInfo = "vti_title;SW|" + title;
    fpRPCCallStr = String.Format(fpRPCCallStr, method, serviceName, document, metaInfo, putOption, comment, keepCheckedOutOption);

    try
    {
        // Add a line-feed character to delimit the end of the command
        byte[] fpRPCCall = System.Text.Encoding.UTF8.GetBytes(fpRPCCallStr + "\n");

        HttpWebRequest wReq = WebRequest.Create(serviceName) as HttpWebRequest;
        wReq.Credentials = System.Net.CredentialCache.DefaultCredentials;
        wReq.Method = "POST";
        wReq.ContentType = "application/x-vermeer-urlencoded";
        wReq.Headers.Add("X-Vermeer-Content-Type", "application/x-vermeer-urlencoded");
        wReq.ContentLength = fpRPCCall.Length + fileBinary.Length;

        using (Stream requestStream = wReq.GetRequestStream())
        {
            requestStream.Write(fpRPCCall, 0, fpRPCCall.Length);

            int bytesRead = 0;
            int chunkSize = 2097152; // 2 MB chunks
            int tailSize;
            int chunkNum = Math.DivRem((int)fileBinary.Length, chunkSize, out tailSize);

            // Chunk the binary directly from the stream to the request buffer.
            for (int i = 0; i < chunkNum; i++)
            {
                data = new byte[chunkSize];
                bytesRead = fileBinary.Read(data, 0, chunkSize);
                requestStream.Write(data, 0, chunkSize);
            }
            // Send the remainder, if any.
            if (tailSize > 0)
            {
                data = new byte[tailSize];
                bytesRead = fileBinary.Read(data, 0, tailSize);
                requestStream.Write(data, 0, tailSize);
            }

            // Now get the response from the server
            WebResponse response = wReq.GetResponse();
            int readCount, totalRead;
            long contentLength = response.ContentLength;
            bool noLength = false;
            if (contentLength == -1)
            {
                noLength = true;
                contentLength = chunkSize;
            }
            byte[] returnBuffer = new byte[(int)contentLength];
            using (Stream responseStream = response.GetResponseStream())
            {
                totalRead = 0;
                do
                {
                    readCount = responseStream.Read(returnBuffer, totalRead, ((int)contentLength) - totalRead);
                    totalRead += readCount;
                    // If the length was unknown, grow the buffer as needed.
                    if (noLength && (totalRead == contentLength))
                    {
                        contentLength += chunkSize;
                        byte[] buffer2 = new byte[(int)contentLength];
                        Buffer.BlockCopy(returnBuffer, 0, buffer2, 0, totalRead);
                        returnBuffer = buffer2;
                    }
                }
                while (readCount != 0);
            }
            if (noLength)
            {
                // Trim the buffer down to the bytes actually received.
                byte[] buffer3 = new byte[totalRead];
                Buffer.BlockCopy(returnBuffer, 0, buffer3, 0, totalRead);
                returnBuffer = buffer3;
            }
            returnStr = Encoding.UTF8.GetString(returnBuffer);
        }
    }
    catch (Exception ex)
    {
        // error handling
    }
}
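For completeness, here is how you might call this method (the file, library, and title are placeholders); opening the file as a stream is what avoids allocating one large byte array:

// Placeholder paths; the stream is chunked to the request, never fully buffered.
using (FileStream fs = File.OpenRead(@"C:\temp\report.docx"))
{
    CreateDocumentRPC("report.docx", "doclibrary", "Quarterly Report", true, fs);
}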
As you can see, the complexity of coding against RPC can be daunting, although you can refactor this code into something much more reusable. Parsing the return response can also be a bit strange. Below is an example of a successful document creation response from the SharePoint server:
<html><head><title>vermeer RPC packet</title></head>
<body>
<p>method=put document:12.0.0.4518
<p>message=successfully put document 'tester2/crpc.png' as 'tester2/crpc.png'
<p>document=
<ul>
<li>document_name=tester2/crpc.png
<li>meta_info=
<ul>
<li>vti_rtag
<li>SW|rt:61935CFA-736B-4311-97AA-E745777CC94A@00000000001
<li>vti_etag
<li>SW|"{61935CFA-736B-4311-97AA-E745777CC94A},1"
<li>vti_filesize
<li>IR|1295
<li>vti_parserversion
<li>SR|12.0.0.6318
<li>vti_modifiedby
<li>SR|BASESMCDEV2\test.moss
<li>vti_timecreated
<li>TR|19 May 2009 17:28:35 -0000
<li>vti_title
<li>SW|wackout
<li>vti_lastheight
<li>IX|78
<li>ContentTypeId
<li>SW|0x010100B1C4E676904AB94BA76515774B23E02D
<li>vti_timelastmodified
<li>TR|19 May 2009 17:28:35 -0000
<li>vti_lastwidth
<li>IX|411
<li>vti_author
<li>SR|BASESMCDEV2\test.moss
<li>vti_sourcecontrolversion
<li>SR|V1.0
<li>vti_sourcecontrolcookie
<li>SR|fp_internal
</ul>
</ul>
</body>
</html>
If the response contains no substring “osstatus” and no substring “Windows SharePoint Services Error”, the call succeeded.
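A minimal sketch of that check, applied to the returnStr captured by the method above (IsRpcSuccess is a hypothetical helper name):

// Returns true when the RPC response body indicates success.
public static bool IsRpcSuccess(string response)
{
    return response.IndexOf("osstatus", StringComparison.OrdinalIgnoreCase) < 0
        && response.IndexOf("Windows SharePoint Services Error", StringComparison.OrdinalIgnoreCase) < 0;
}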
I hope this helps in your evaluation of the different ways you can upload data to SharePoint. There are many things to consider when selecting a method; high on your priority list should be speed, scalability, and the ability to send metadata. Sometimes the method that looks the hardest can be the best choice, and it is up to you as a developer to abstract away the complexity.
14 comments:
Nice overview! Very helpful. Thanks, Steve!
Thanks..! good contents.
Hi,
How to do folder operations using VBA?
fso doesn't work!!!
Same question: I'd like to execute a VBA macro to upload files into our company sharepoint site
Very helpful indeed
Excellent article, it really helped me out! Just one sidenote:
for (int i = 0; i < chunkNum; i++)
{
data = new byte[chunkSize];
bytesRead = fileBinary.Read(tmpData, 0, chunkSize);
requestStream.Write(data, 0, chunkSize);
}
I believe it's meant to be:
bytesRead = fileBinary.Read(data, 0, chunkSize);
This way it works with larger files.
Hi Steve,
Great article with sound content
Thanks,
Adi
Thanks Steve, this is very helpful for a project I have just begun working.
- Steve F
Very helpful post, thanks!
I intended to reuse RPC function and modify it in order to set AllowWriteStreamBuffering = false. Objective is to avoid creating a memory buffer on client machine prior to send data so we do not get out of memory exceptions when loading big files.
Unfortunately all my attempts so far ended with an exception message when setting this property where we get response from server in the below line
//Now get the response from the server
WebResponse response = wReq.GetResponse();
It throws the below exception
"This request requires buffering data to succeed."
Any chance that you guys got same problem?
Regards,
Richard
You need to leave AllowWriteStreamBuffering = true. The following code:
bytesRead = fileBinary.Read(tmpData, 0, chunkSize);
Should be
bytesRead = fileBinary.Read(data, 0, chunkSize);
This should allow large file uploads.
Hi Steve, I am facing a problem with the CopyIntoItems web service. I am developing an iOS application which should be capable of uploading files to any SharePoint site (especially SP 2010). With the CopyIntoItems web service (SOAP 1.2), I am able to upload files consistently up to 40KB. If the file size crosses that, it becomes inconsistent (up to 100KB), and files larger than 100KB fail all the time. I am sending the file data base64 encoded. When the error happens, I get the underlying HTTP error that the server has closed the connection. Even WireShark shows that after the initial 1 or 2 request blocks, the server rejects the request. I have tried two different SharePoint sites; both behave the same way. However, I am able to upload files up to 50MB to those SP sites using the browser. As I am using iOS code, I do not have access to the .NET classes, so my request is a simple HTTP request. As per your reply at http://splashurl.com/qat93j4 it seems CopyIntoItems has this problem built in, but then how are applications like SharePlus able to upload files of any size to any SP server (i.e. without any server-side handlers)?
I am getting the following error on the line below:
bytesRead = fileBinary.Read(data, 0, tailSize);
Exception: The request was aborted: The request was canceled.
InnerException: Cannot close stream until all bytes are written.
Excellent article, thank you. I am trying to compare and contrast Copy.CopyIntoItems vs. OfficialFile.SubmitFile - not a lot of info as to why one over the other. I am surprised that the OfficialFile.SubmitFile is absent in your review. Perhaps OfficialFile is newer than your article?
Hi Steve, excellent information. My requirement is to upload the file through JavaScript, so we are trying to identify the REST API for SharePoint 2010. We could see a few services for version 2013; does 2010 not have a REST service that will take care of uploading the file? Could you please help me out? Thanks!