Tag Archives: library

It Looks Like You’re Building a Large Library. Would You Like Help?

SharePoint 2010 is more than just SharePoint 2007 plus a bunch of new bullet points on the box. We didn’t just haphazardly build a bunch of new features, look back at the fertile seeds we planted, and muse about how “everything should work pretty well as libraries get large.” We built, and more importantly, tested all the features you’re reading about with scale in mind. We are setting new scale targets for 2010 that go above and beyond what we set in 2007. These numbers are not final yet, but we’re shooting for tens of millions of documents in a single library, depending on some specific parameters of your scenario.

When I throw out numbers like that, I’m not talking about just big, static libraries with content that just sits there. We want you to do crazy things with SharePoint 2010 like stuff a million document sets in a single document library with workflows running every which way, a hundred different retention policies firing off actions when you least expect them, and users uploading, tagging, and searching day in and day out. All the goodness of the SharePoint platform will be available to you whether you’re building a team site, a collaborative repository, a knowledge base, or a super large archive.

Like a plump, juicy sausage, much of what gives SharePoint 2010 its delicious scalability is stuff that most people don’t need (or want) to know about. For the most part, scale just works. However, the chef (or information architect) is still a super important player. A well-planned repository will have your users coming back for seconds and writing rave reviews; a poorly-planned one will have them chugging Pepto-Bismol the next morning. Just because you can stuff a bunch of documents into a SharePoint 2010 library without your server igniting in flames the next day doesn’t mean that you should without first thinking through how to best use the tools available to deliver an excellent experience to your end users.

So, even though scale in SharePoint 2010 just works, you’re not going to install the bits on day 1 and have a massive, searchable, beautiful content storefront on day 2. Guidance still matters, and believe me, we know it; this blog entry is just the beginning of the content we’re planning on delivering to help you on this front. I wouldn’t even call this blog entry guidance; it’s just a primer on the features and capabilities of SharePoint 2010 that you will grow to love if you’re passionate about scale at the library level – if you want to shove a whole bunch of documents in one place and have it be a great experience for both IT and your end users.

So what are these features and capabilities? Here are a few of the most important ones that I’m going to blog about now and in the near future:

  1. We protect your database backend from dangerous queries. If you run a query against any database that requires it to scan through millions of items to find the ones you’re asking for, you’re going to tie up the server’s CPU for an uncomfortably long time. SharePoint is no exception. Even in SharePoint 2010, there is a class of user operations in certain scenarios that make unreasonable demands on the backend. For these operations, our strategy is to nip them in the bud before they’re executed, which keeps your high-demand servers healthy and responsive. Knowing when this throttling will kick in and planning for it is an important part of large list planning.
  2. We give end-users tools to find content. When you have a sea of documents, the specific one you’re looking for can seem like a needle in a haystack. Structured metadata, easy tagging, metadata navigation, and built-in search refinement make this a less daunting task in SharePoint 2010 out of the box. This is an area we are particularly passionate about; after all, what good is a hugely scalable library if your end users hate it and can’t find what they’re looking for?
  3. We help developers write excellent code. In SharePoint 2007, we didn’t give developers the right tools to write code that scaled well as the amount of content in your site grew. Even worse, it was pretty hard to tell why and when code was bad, and if your site was running slowly, which one of your ten custom web parts was bringing things down. You had to “build around” SharePoint and do things “just so” to keep this from happening. In SharePoint 2010, we give you a bunch of tools to make this story better.

Dangerous Queries

One challenge we’ve consistently seen customers run into when building large repositories on SharePoint 2007 is trouble with large containers. As the number of documents in any single container grows – either at the root of a library, or in a folder – bad things start to happen. For one, as your document-to-container ratio increases, it becomes harder and harder to find exactly what you’re looking for. More serious are the performance implications of large containers. Any of the out-of-the-box ways of retrieving content from containers in SharePoint 2007 – like the All Documents view, the Explorer view, or a Content Query web part – would work, but they didn’t scale very well. Loading All Documents in a library with a million items at the root would take a long time to finish. The big problem here is that you wouldn’t be the only one affected; all your friends running SharePoint sites on that same database server would experience things slowing to a crawl as well, as the database server dutifully iterated over those million documents to find the right ones.

Why does this happen? Any time you ask for content from SharePoint, you have to specify how it’s sorted – for example, the All Documents view in SharePoint 2007 asks for the top 100 results, sorted by filename. But items aren’t sorted by filename in the SharePoint content database – so, to bring you this view, SharePoint has to gather up all million items, sort them, and finally display the 100 at the top of the sorted list. Imagine this as being like flipping through the residential section of a phonebook to find the first 100 addresses, sorted in alphabetical order. This would be a miserable task, because the telephone book isn’t sorted in this way – so in order to ensure your sorted list was accurate in the end, you’d have to look through the entire residential section, from start to finish; after all, the last person listed in the phone book might live at 1000 Aardvark Lane.

Large Lists and Fallback Queries

The laws of physics are the same in SharePoint 2010 as they were in SharePoint 2007; if you run a query that needs to touch a very large number of items, you’re going to have to wait a long time, and so will everybody else. One prominent thing we did in SharePoint 2010 is to nip these queries in the bud before they get executed. To make a long story short (you can read the long story here), a farm administrator can set a threshold which defines the maximum number of items a single SharePoint query can touch. By default this threshold is 5,000. Any library with more items than this threshold is a large list.
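To make this concrete, here is a hedged sketch of how a farm administrator might inspect or adjust the threshold through the server object model (the URL is hypothetical; the same setting is also exposed in Central Administration under the web application’s resource throttling settings):

using System;
using Microsoft.SharePoint.Administration;

// Sketch: inspecting and adjusting the list view threshold for a
// web application. Run under a farm administrator account.
SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://server"));
Console.WriteLine(webApp.MaxItemsPerThrottledOperation);   // 5000 by default

// Raise it only after weighing the load on your database backend.
webApp.MaxItemsPerThrottledOperation = 10000;
webApp.Update();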

Let’s go back to our example of the library with one million items at the root. Say you had that library in SharePoint 2007, and you upgraded to SharePoint Server 2010. The first thing you’ll see upon navigating to this library will look something like this:

[Screenshot: the All Documents view of the million-item library, with a yellow information bar above the list view]

See the yellow bar above the list view? That’s a sign you have the Metadata Navigation and Filtering site feature turned on and it’s causing something magical to happen! When you load this view, SharePoint 2010 knows that you’re being greedy and asking it to scan through those million items. Since this query exceeds the maximum number of items a single query is allowed to scan (5,000), SharePoint doesn’t run the query. But who wants to stare at an empty list view? Instead of running this query as-is, SharePoint finagles it a bit and transforms it into a query that’s almost as good as the one you were asking for, but won’t make the database buckle under the pressure. In this case, we assume that it’s fairly likely that the document you’re looking for is one of the most recently created items in the library – so instead of scanning all one million items, we only scan the top 1,000 or so recently created documents, sort those by filename, and show them to you in the list view. This is what we call a simple fallback query: a query that doesn’t specify an index and asks for too many items, so instead of considering the entire list as eligible for the query, SharePoint considers only the thousand or so most recently added items.

“Wait a second. You’re telling me that SharePoint throttles queries without asking me first? How on earth am I supposed to find anything in this crazy world of fallback queries and partial results?”

Let me assure you; this throttling business is a good thing. It’s a core ingredient in what makes SharePoint 2010 a resource for addressing your scale challenges. Gone are the sleepless nights where you toss and turn and worry about page faults on your database cluster resulting from Mack in Accounting stuffing 6,000 beer pong tournament photos in the root of a library in a forgotten team site in the dusty corners of your SharePoint deployment. The SharePoint 2010 feature set replaces this overarching concern with a set of well-scoped challenges; instead of worrying about every library that might get big, you get to plan for and craft experiences for the set of libraries that need to get big for business reasons.

I should mention really quickly that throttling is about more than just list views. There is a whole class of operations that involve iterating through all the documents in a list, or all the documents in a folder, that will get throttled (in other words, they will not execute) when the list or container is large. These operations include things like the following (a defensive-coding sketch follows the list):

  • Adding a column to a library
  • Creating an index on a library
  • Deleting a large folder in a library
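For developers, the practical upshot is to expect throttling and write against it defensively. A minimal sketch under stated assumptions (the site URL and list name are hypothetical; the override is honored only where the farm administrator has enabled object-model overrides for the web application):

using System;
using Microsoft.SharePoint;

// Sketch: querying a potentially large list without bringing down the backend.
using (SPSite site = new SPSite("http://server"))          // hypothetical URL
using (SPWeb web = site.OpenWeb()) {
  SPQuery query = new SPQuery();
  query.Query = "<OrderBy><FieldRef Name='FileLeafRef' /></OrderBy>";
  query.RowLimit = 100;
  // Ask to bypass throttling; honored only if the administrator allows it.
  query.QueryThrottleMode = SPQueryThrottleOption.Override;
  try {
    SPListItemCollection items = web.Lists["Documents"].GetItems(query);
    Console.WriteLine(items.Count);
  } catch (SPQueryThrottledException) {
    // The query exceeded the threshold and the override was not permitted;
    // fall back to a narrower, indexed query instead.
  }
}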

Metadata Navigation – finding and working on content in large lists

[Screenshot: the same library filtered by the “demonstration scripts” term in the metadata navigation tree view, with no yellow bar above the list view]

Above is another screenshot from my million-item library. This time, we’ve put a couple of SharePoint 2010 features to work. Note that I have “demonstration scripts” selected in the tree view on the left-hand side, and my list view renders without the yellow bar telling me I’m only seeing the newest results. That hierarchy of tags you see there represents a taxonomy, Item Type. I am browsing the documents in this library according to their Item Type; in the screenshot, I am filtering to show all documents with the value “demonstration scripts”. Here are the steps that I took to make this happen:

  1. I created a taxonomy that describes my content. You can look forward to some posts from our very own Dan Kogan on this very topic in this very blog in the near future. There’s a lot to learn here. Not just any taxonomy will do here; it needs to be one that broadly divides my content up into evenly-sized buckets. For example, if I had 990,000 demonstration scripts, the query you see above would not get me anywhere. In that case, it wouldn’t make much sense to use Item Type as a piece of metadata and a navigation hierarchy for this library; I would need to find another, more divisive way to pivot the data.
  2. I bound that taxonomy to a field in my library called Item Type. Think of a taxonomy field as a choice field on steroids. Instead of picking values out of a flat list, you pick them out of a tree.
  3. I configured that field as a navigation hierarchy. Every library now has a Metadata Navigation and Filtering settings page where you can configure navigation hierarchies (the filters you see arranged in the tree view) and key filters (the additional filters that show up beneath the tree view). If you prefer code to settings pages, see the sketch after this list.
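As a hedged illustration of steps 2 and 3 in the server object model (the site URL and list name are hypothetical, and I’m assuming the classes from the Microsoft.Office.DocumentManagement.MetadataNavigation namespace that ship with the ECM bits):

using Microsoft.SharePoint;
using Microsoft.Office.DocumentManagement.MetadataNavigation;

// Sketch: expose the "Item Type" taxonomy field as a navigation hierarchy.
using (SPSite site = new SPSite("http://server/sites/repo"))   // hypothetical
using (SPWeb web = site.OpenWeb()) {
  SPList list = web.Lists["Documents"];
  MetadataNavigationSettings settings =
      MetadataNavigationSettings.GetMetadataNavigationSettings(list);
  settings.AddConfiguredHierarchy(
      new MetadataNavigationHierarchy(list.Fields["Item Type"]));
  // Persist the settings and let SharePoint update the list views.
  MetadataNavigationSettings.SetMetadataNavigationSettings(list, settings, true);
}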

In these three easy steps, I made “Item Type” a first class navigational pivot over the data. Instead of just staring at a partial list of content at the root, I can now browse with impunity by this virtual folder structure.

Here are a couple of cool aspects of this feature that aren’t apparent from a single nifty screenshot:

  1. Metadata navigation lets you slice and dice multiple ways. I might have a bunch of taxonomies on my library that classify content in different ways; for example, I might have a Products field, a Region field and a Competitors field, all bound to domain-specific taxonomies that classify the content along those dimensions. Depending on my current task, it might make more sense to filter by the Region field (for example, if I’m looking for the latest sales figures for the North America region). I get more filters than just my virtual folder; I can combine this filter with any number of key filters or list view column filters to drill down to just the content I want (for example, I want to see all demonstration scripts by the ECM team created after 2007).
  2. Metadata navigation thinks about indices and large lists so you don’t have to. Hey, remember just a few minutes ago when we were talking about large lists, indices, and being throttled? Well, metadata navigation thinks a lot about indices and how to run queries the “right way” to make them perform well and prevent throttling from happening. For starters, all the fields you configure as navigation hierarchies and key filters get indexed, and the resulting queries are written in a way that ensures the best index is used to make the query succeed.

You aren’t immune from the laws of physics; if you ask for documents tagged with demonstration scripts and there are 10,000 demonstration scripts, we’re not going to be able to show you all of them. In this case, though, you get something better than a simple fallback; you get an indexed fallback, which means that instead of considering the entire list, the query considers only the items that match the indexed portion of your query.
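If you are managing indices yourself rather than relying on metadata navigation, adding one is a one-liner in the server object model; a quick hedged sketch, reusing the hypothetical “Documents” list and “Item Type” field from the earlier sketch (web is an SPWeb as before):

// Sketch: index a column so queries filtering on it can use an indexed
// fallback instead of a simple one. Note that creating an index is itself
// throttled once the list is already large, so do this early.
SPList list = web.Lists["Documents"];
list.FieldIndexes.Add(list.Fields["Item Type"]);
list.Update();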

Wrap-up

This article was just the first in my series of posts about architecting and building large lists filled with discoverable content. Here’s what you can expect over the next few weeks:

  • A deep dive on metadata navigation, how it works, and some tips to getting the most out of it
  • A discussion on how other features, like Search and the Content Query Web Part, fit into the equation, and how to configure their metadata filtering capabilities
  • Some geeky developer tips on writing code that plays nicely with large lists

After that, I’ll be widening my scope a bit to talk about the overall knowledge management story in SharePoint 2010 – which is about more than just browsing for content in a library!

Lincoln DeMaris, Program Manager, ECM



O365/SPO + Azure + AuthN – Critical Path Training’s Office365 AuthN Helper Library

This post is part of a series on Office365/SharePoint Online, Windows Azure and Authentication.

In this last post in my Office365/SharePoint Online + Windows Azure + Authentication blog series, I want to introduce a little helper project I am using. To make life easier I created a small O365 authentication helper library that does a lot of the heavy lifting for you. It covers two of the three workarounds I outlined in this series.

The code samples I show in this post were taken from CPT’s Office365/SharePoint Online Claims Authentication Helper Library… you can get the code from the Critical Path Training site’s Members section; look in the Code Samples section.

Introducing the Claims Auth Friendly ClientContext: ClaimsClientContext

First, it provides a special Client-Side Object Model (CSOM) ClientContext. This object has a few properties needed to authenticate with Microsoft Online (MSO) and obtain the SAML token. It then rewires the ClientContext so that every request includes the SAML token:

using System;
using Microsoft.SharePoint.Client;

namespace CriticalPathTraining.Office365.AuthLibrary {
  public class ClaimsClientContext : ClientContext {
    public string MsoUsername { get; set; }
    public string MsoPassword { get; set; }
    public string MsoRootSiteCollectionUrl { get; set; }

    public ClaimsClientContext(string webFullUrl) : base(webFullUrl) { }
    public ClaimsClientContext(Uri webFullUrl) : base(webFullUrl) { }

    private MsOnlineClaimsHelper _claimsHelper;
    /// <summary>
    /// Microsoft Online claims helper used to authenticate to
    /// SharePoint Online.
    /// </summary>
    private MsOnlineClaimsHelper MsftOnlineClaimsHelper {
      get {
        if (_claimsHelper == null) {
          _claimsHelper = new MsOnlineClaimsHelper(
                                    MsoUsername,
                                    MsoPassword,
                                    MsoRootSiteCollectionUrl);
        }
        return _claimsHelper;
      }
    }

    /// <summary>
    /// Rewire event for client context so that
    /// every request includes authenticated cookies.
    /// </summary>
    protected override void OnExecutingWebRequest(WebRequestEventArgs args) {
      args.WebRequestExecutor.WebRequest.CookieContainer =
        MsftOnlineClaimsHelper.CookieContainer;
    }
  }
}

Usage is very simple… the downloadable library includes a test project that shows the usage:

[TestMethod]
public void ClaimsClientContextTest() {
  using (var context = new ClaimsClientContext(MSO_SPSITE_URL) {
    MsoUsername = MSO_USERNAME,
    MsoPassword = MSO_PASSWORD,
    MsoRootSiteCollectionUrl = MSO_ROOT_SPSITE_URL
  }) {
    // get the web
    var web = context.Web;
    context.Load(web, w => w.Title);
    context.ExecuteQuery();

    Assert.IsNotNull(web.Title);
    Assert.IsTrue(web.Title.Length > 0);

    Console.WriteLine("Retrieved site title: " + web.Title);
  }
}

Introducing the Claims Friendly Web Client: ClaimsWebClient

The other thing I give you is a special version of the WebClient class that makes working with claims a bit easier. It has a single property where you specify the CookieContainer that will contain the SAML token. The library exposes the samples Wictor Wilén provided to do the authentication for you and generate the CookieContainer:

using System;
using System.Net;

namespace CriticalPathTraining.Office365.AuthLibrary {
  public class ClaimsWebClient : WebClient {
    /// <summary>
    /// Cookies that should be included on every Web request.
    /// </summary>
    public CookieContainer CookieContainer { get; set; }

    /// <summary>
    /// Override base GetWebRequest() method to always include
    /// cookies if they were specified.
    /// </summary>
    protected override WebRequest GetWebRequest(Uri address) {
      var request = base.GetWebRequest(address);
      if (request is HttpWebRequest && CookieContainer != null)
        ((HttpWebRequest)request).CookieContainer = CookieContainer;

      return request;
    }
  }
}

The associated test project shows the usage for this as well:

[TestMethod]
public void ClaimsWebClientTest() {
  // file to download
  string fileToDownload = "/_layouts/images/siteIcon.png";

  var claimsHelper = new MsOnlineClaimsHelper(
                            MSO_USERNAME,
                            MSO_PASSWORD,
                            MSO_ROOT_SPSITE_URL);
  using (var webClient = new ClaimsWebClient() {
    CookieContainer = claimsHelper.CookieContainer
  }) {
    // get the file
    var fileStream = webClient.OpenRead(
        string.Format("{0}{1}", MSO_SPSITE_URL, fileToDownload)
    );

    // download & write local
    string tempFilePath = Path.GetTempFileName();
    var tempFile = File.Open(tempFilePath, FileMode.OpenOrCreate);
    fileStream.CopyTo(tempFile);
    fileStream.Close();
    tempFile.Close();

    Console.WriteLine("Downloaded file to: " + tempFilePath);

    // make sure file exists & bigger than 0 bytes
    Assert.IsTrue(File.Exists(tempFilePath));
    var fileInfo = new FileInfo(tempFilePath);
    Assert.IsTrue(fileInfo.Length > 0);
  }
}

Last but not least, for completeness I threw in a test for working with any of the SharePoint *.ASMX or *.SVC Web services. You don’t need any special helpers here, as their proxies already include a CookieContainer property:

[TestMethod]
public void WebServiceTest() {
  XmlNode results;

  var claimsHelper = new MsOnlineClaimsHelper(
                            MSO_USERNAME,
                            MSO_PASSWORD,
                            MSO_ROOT_SPSITE_URL);
  using (var client = new Lists() {
    Url = string.Format("{0}_vti_bin/Lists.asmx", MSO_SPSITE_URL),
    UseDefaultCredentials = true,
    CookieContainer = claimsHelper.CookieContainer
  }) {
    results = client.GetList("Shared Documents");
  }

  Assert.IsNotNull(results);
}



O365/SPO + Azure + AuthN – Workarounds and Fixes for Claims-Based Auth Sites

This post is part of a series on Office365/SharePoint Online, Windows Azure and Authentication.

Now let’s see how we can apply the authentication fixes to each of the different ways you can access SharePoint remotely. In this post I’ll cover each of the specific tools (REST/OData, CSOM, Web services, and WebClient) and the trick each one requires. Each one has its pros & cons, which is why I had to use all four tools in my demo in my breakout session Out of the Sandbox and into the cloud: Build your next SharePoint app on Azure at the Microsoft SharePoint Conference 2011 (see that session for where you can download the sample).

The code samples I show in this post were taken from my session Out of the Sandbox and into the cloud: Build your next SharePoint app on Azure at the Microsoft SharePoint Conference 2011… you can get the demo code from the Critical Path Training site’s Members section… look for the AC’s SharePoint Conference 2011 Sessions download in the Presentations section.

For all the samples below, I created a private property in my class called MsftOnlineClaimsHelper that creates a local instance of the MSO helper and automatically authenticates.

private MsOnlineClaimsHelper _claimsHelper;
/// <summary>
/// Microsoft Online claims helper used to authenticate to SharePoint Online.
/// </summary>
private MsOnlineClaimsHelper MsftOnlineClaimsHelper {
  get {
    if (_claimsHelper == null) {
      _claimsHelper = new MsOnlineClaimsHelper(
                RoleEnvironment.GetConfigurationSettingValue("SharePointUsername"),
                RoleEnvironment.GetConfigurationSettingValue("SharePointPassword"),
                RoleEnvironment.GetConfigurationSettingValue("SharePointRootSiteUrl"));
    }
    return _claimsHelper;
  }
}

CSOM Client Context & CBA Challenges

One of the most common ways to work with SharePoint 2010 from outside the SharePoint server is the CSOM. Authentication with the CSOM is pretty straightforward using the ClientContext object. The trick comes into play with claims-based authentication (CBA).

When you want to switch to FBA it’s a simple property switch on the ClientContext, but as I previously stated, there is no such switch for CBA. What you need to do is rewire the ClientContext so that every request it makes to a site collection includes a SAML token to authenticate the request. You do this by trapping the ExecutingWebRequest event of the ClientContext and injecting the cookie container generated by the MSO helper into all requests:

private ClientContext _clientContext;
/// <summary>
/// CSOM client context.
/// </summary>
private ClientContext CsomClientContext {
  get {
    if (_clientContext == null) {
      _clientContext = new ClientContext(
        RoleEnvironment.GetConfigurationSettingValue("SharePointSiteUrl")
      );

      // wire up claim helper to include SAML tokens (cookies) in all requests
      _clientContext.ExecutingWebRequest +=
            (webRequestSender, args) => {
                args.WebRequestExecutor.WebRequest.CookieContainer
                    = MsftOnlineClaimsHelper.CookieContainer;
            };
    }
    return _clientContext;
  }
}

Now, almost all requests the ClientContext makes will include the SAML token! I say “almost” because there is a bit of an issue with the ClientContext. There is a method called File.OpenBinaryDirect() that you can use to download a file from SharePoint. For some reason this method doesn’t raise the same ExecutingWebRequest event, so your token isn’t included! Ouch… an oversight in the API, methinks… regardless, you can get around this using a stock Web client…

Web Request & CBA Challenges

Since File.OpenBinaryDirect() won’t pass the SAML token along to SPO, the way to download a file is to issue a plain Web request instead. However, that request also needs a little bit of work to pass along the SAML token. What I did was create a custom version of the WebClient class that does this for you, as follows:

namespace AndrewConnell.ACsCichlids.StoreFront {
  public class ClaimsFriendlyWebClient : WebClient {
    private CookieContainer _cookieContainer;
    public CookieContainer CookieContainer {
      get { return _cookieContainer; }
      set { _cookieContainer = value; }
    }

    protected override WebRequest GetWebRequest(Uri address) {
      var request = base.GetWebRequest(address);
      if (request is HttpWebRequest)
        ((HttpWebRequest)request).CookieContainer = _cookieContainer;

      return request;
    }
  }
}

This class is handy when you want to download a file from a site collection. To use it you simply pass in the MSO helper’s cookies and they will be included on all requests:

using (var webclient = new ClaimsFriendlyWebClient() {
  CookieContainer = MsftOnlineClaimsHelper.CookieContainer
}) {
  // download file into a stream
  var fileStream = webclient.OpenRead(cichlidPicture.OriginalUri);

  // create & save blob
  var blob = AzureStorageContainer.GetBlobReference(cichlidPicture.ImportedFilename);
  blob.UploadFromStream(fileStream);
}

REST / OData / Web Services & CBA Challenges

My preferred way to read/write data in SharePoint lists is the RESTful OData service ListData.svc. This service, like all the other Web services included with SharePoint 2010 (*.ASMX & *.SVC), doesn’t understand claims by default. When you want to authenticate against a Windows or FBA site you have to create a network credential object and set it as a property on the service proxy.

However this isn’t available when it comes to authenticating with CBA. Like the ClientContext, you need to rewire the calls to make sure they include the SAML token in each request. This is pretty simple, as most service proxies, like the one for the Lists.asmx service, include a CookieContainer property:

XmlNode results;

// call lists.asmx web service to get attachments for each item
//    use same "cookie container" technique to authenticate
using (var client = new Lists() {
  Url = siteUrl + "/_vti_bin/Lists.asmx",
  UseDefaultCredentials = true,
  CookieContainer = MsftOnlineClaimsHelper.CookieContainer
}) {
  results = client.GetAttachmentCollection(
    RoleEnvironment.GetConfigurationSettingValue("SharePointManagementListName")
    , listItemId.ToString()
  );
}
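The ListData.svc proxy is a bit different: the generated class derives from DataServiceContext, which doesn’t expose a CookieContainer property directly. A minimal sketch of one way to handle it, under the assumption that you hook the context’s SendingRequest event (siteUrl and the claims helper as in the samples above):

using System;
using System.Data.Services.Client;
using System.Net;

// Sketch: inject the SAML cookies into every request the OData context
// makes; the same event is available on the generated ListData.svc proxy.
var odataContext = new DataServiceContext(
    new Uri(siteUrl + "/_vti_bin/ListData.svc"));
odataContext.SendingRequest += (sender, e) => {
  ((HttpWebRequest)e.Request).CookieContainer =
      MsftOnlineClaimsHelper.CookieContainer;
};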

There you have it… hopefully this series & code samples will help you authenticate into your site collections in SharePoint Online!



Upgrading SharePoint SQL Servers to SQL Server 2008

SQL Server 2008 is now released and is supported by WSS 3.0 and MOSS 2007 SP1 and above, although we probably won’t see a supportability statement anytime soon due to resource constraints with the content folks. I will let you know when the official supportability statement is released.
[Update] – Support has been announced. See http://technet.microsoft.com/en-us/library/cc262485.aspx for MOSS and http://technet.microsoft.com/en-us/library/cc288751.aspx for WSS. (look towards the bottom)
Upgrading from 2005 to 2008 is a pretty simple process. Here are some things to be aware of when using Upgrade Advisor. If you run Upgrade Advisor (recommended for SQL Servers not dedicated to SharePoint 2007) you will see the following warnings in the report. These warnings can safely be ignored:
• Full-Text Search has changed in SQL Server 2008 – SharePoint 2007 no longer uses SQL Full-Text Search and will not be affected by these changes.
• Column aliases in ORDER BY clause cannot be prefixed by table alias – This is flagged on each instance of the proc_GetTpPageMetaData stored procedure in each content database. Though this warning suggests that the stored procedure will not work correctly in SQL 2005 and SQL 2008, it apparently does, and the warning can be ignored.
• Other Database Engine upgrade issues – Upgrade Advisor doesn’t check for all possible upgrade and compatibility issues. This can be ignored for SharePoint 2007 databases.
Here are some additional things to be aware of when building out a new farm or provisioning new services on SQL Server 2008.
(Thanks to Gabe Bratton and Rahul Sakdeo for this info)
• SSRS and MOSS Report Center – On servers where MOSS is installed on top of SQL Server 2008 with Reporting Services, the potential for a URL conflict exists, since both can end up with the same URL. The workaround is to use a non-default web site for hosting MOSS. You can distinguish the sites using an IP address, host header, or port.
• Least Privilege Deployments and WSS 3.0 – There is at least one known issue where provisioning a new web application will fail if the content access account is running without sufficient permissions on the new database. I haven’t reproduced this or tested a workaround, but I imagine that if you temporarily give the content access account sysadmin permissions on the SQL Server you will avoid this error.
You will need to install .NET 3.5 SP1 and hotfix KB942288-v4 (Windows Installer 4.5) – update services will be stopped and started during the install. These installs will likely require a reboot. One will wait for the other to complete before prompting to reboot. (Two installs – one reboot.)
Run setup.exe again after the reboot, choose Installation > Upgrade from SQL Server 2000 or SQL Server 2005, and follow the prompts. I imagine a resource-constrained or large SQL instance will take some time to upgrade. I did not notice any loss of availability with my databases during the upgrade, which is good since my upgrade took almost 4 hours per instance, not including the prerequisites install. I have a feeling it took so long because of the lack of memory (2 GB) on my virtual server; there was lots of paging occurring during the upgrade. The upgrade process itself seems lightweight, never consuming more than a few percent of the processor and about 30 MB of memory.
If using database mirroring, upgrade the mirror and witness instances first. If running a mirroring split (principal databases on both instances), fail over to one node or the other. By upgrading the mirror and witness first you will ensure mirroring continues to work during the upgrade, and you will minimize downtime due to the mandatory reboot. Make sure to upgrade the witness server and mirror server before attempting a failover. Otherwise, the failover will fail and you will end up with unprotected databases; worse, you will need to break mirroring to bring your principal databases online.
All in all, the upgrade from SQL 2005 to SQL 2008 is a straightforward process. While I recommend you test the upgrade process, I doubt you will find any surprises on a dedicated SharePoint backend. I hope that IT shops fast-track this upgrade so we can focus on and take advantage of the new feature set in 2008. I plan to talk about how we SharePoint folks can leverage those new features in a post in the near future. [Update] See http://blogs.msdn.com/mikewat/archive/2008/08/19/improving-sharepoint-with-sql-server-2008.aspx for information on new features.

SQL Server 2008 is now officially supported!

Microsoft has announced SharePoint Products and Technologies 2007 (SP1) support for SQL Server 2008. See http://technet.microsoft.com/en-us/library/cc262485.aspx for MOSS and http://technet.microsoft.com/en-us/library/cc288751.aspx for WSS. (look towards the bottom)
See my posts on upgrade http://blogs.msdn.com/mikewat/archive/2008/08/11/upgrading-sharepoint-sql-servers-to-sql-server-2008.aspx and new features http://blogs.msdn.com/mikewat/archive/2008/08/19/improving-sharepoint-with-sql-server-2008.aspx.

SP 2010: Uploading files using the Client OM in SharePoint 2010

Author: Tobias Zimmergren http://www.zimmergren.net | http://www.tozit.com | @zimmergren
Introduction
In this article I will guide you through the process of uploading a document to any chosen document library in SharePoint 2010 through the Client Object Model.
This application has a very simp …

I need a free SharePoint 2010 video tutorial?

Chosen Answer:

First you should specify what sort of videos you are after. In case you are interested in Configuration/Administration, check out the videos listed at: http://technet.microsoft.com/en-us/library/cc262880.aspx

For Development how-to’s, visit http://msdn.microsoft.com/en-us/sharepoint/ee513147.aspx

Hope it helps.
by: rightsideofwrong
on: 9th December 10