PnP Guidance

GitHub - SharePoint/PnP-Guidance: A collection of community contributed Patterns and Practices guidance that is published to MSDN.
Repo URL: https://github.com/SharePoint/PnP-Guidance/

1 Introduction to PnP Transformation Process

The PnP Transformation process originates from work we did internally at Microsoft, and with our global enterprise customers, during the transition to the add-in model. We first shared the project artifacts internally, but because there was clearly a lot of value for the external community as well, we decided to share the templates and samples with the wider SharePoint community. The key objective of these documents is to share the learnings from actual case studies.

Resources related to the PnP Transformation Process

The documents in this repository concentrate on the typical transformation process from farm solutions to the add-in model. They cover recommendations from both a process and an implementation perspective. This information will help you and your customers to:

  • Move from an on-premises environment to Office 365, or
  • Transform existing farm solutions (full trust code) to add-in model implementations in an existing on-premises environment.

The resources listed in the next section will facilitate your transformation process. We encourage our partners to use these templates to help their customers through the transformation journey.

1.1 Resources

Intro Presentation: This template provides an introduction to the PnP Transformation process and describes the overall transformation approach. See also the accompanying Channel 9 video, which is a recording of this deck.

Kick-Off Presentation Template: This template provides a deep dive into the core principles of developing for Office 365. It also states the PnP Transformation engagement requirements.

Preparedness Presentation Template: This template covers the key aspects of Office 365 development. It also outlines the roles and responsibilities of the customer and the MS partner providing the transformation guidance.

Solution Assessment Report Presentation Template: This template provides an assessment of full trust code solutions running within the SharePoint farm. It specifies the solutions that can be transformed into the add-in model and also those that might need an alternative approach.

Solution Assessment Report Document Template: This template provides a detailed assessment of all the full trust code solutions running within the SharePoint farm. It specifies the solutions that can be transformed into the add-in model and also those that might need an alternative approach.

Design Phase Kick-Off Presentation Template: This template provides an introduction to the design phase of the transformation project. It provides an overview of the activities and outcomes for this phase of the project.

Solution Design Report Presentation Template: This template provides a summary of the assessment and describes the high level transformation approach for every full trust code solution in the farm.

Solution Design Report Document Template: This template provides a detailed transformation process for every full trust code solution in the farm. Alternative options are suggested for the solutions that cannot be transformed.

2 Add a custom ribbon to your SharePoint site

Add or remove a custom ribbon on your SharePoint site. Add JavaScript event handlers using the embed JavaScript technique to handle your custom ribbon’s events.

Applies to: add-ins for SharePoint | SharePoint 2013 | SharePoint Online

The Core.RibbonCommands code sample shows you how to add a custom ribbon to a SharePoint site. Use this solution if you want to:

  • Add a custom ribbon, group, or button to your SharePoint site or list.

  • Display a custom ribbon for specific content types, sites, or lists.

Note This code sample shows how to call the JavaScript functions that handle events raised by the ribbon’s buttons. This code sample does not provide the implementation of the JavaScript event handler functions for the ribbon’s buttons. To implement the JavaScript event handler functions, use the embed JavaScript technique to embed the JavaScript event handler functions on all pages where your custom ribbon appears. For more information about embedding JavaScript, see Customize your SharePoint site UI by using JavaScript.

2.1 Before you begin


To get started, download the Core.RibbonCommands sample add-in from the Office 365 Developer patterns and practices project on GitHub.

2.2 Using the Core.RibbonCommands app


When you run this code sample, on the start page, under Register the ribbon, choose Add Ribbon. When the page refreshes, view the custom ribbon by choosing Documents > Custom Tab.

This code sample defines a custom ribbon by using Models\RibbonCommands.xml. RibbonCommands.xml defines custom ribbon groups, buttons, and UI event handlers for the ribbon. For more information, see Customizing and Extending the SharePoint 2010 Server Ribbon and Server Ribbon XML.

The custom ribbon displays on all sites and lists on the host web because RegistrationId=“0x01” and RegistrationType=“ContentType”. These values specify that the ribbon appears for all content types that inherit from “0x01”, which is the base Item content type. To apply your ribbon to a custom content type, replace “0x01” with your custom content type’s ID. To apply your ribbon to a list, change the value of RegistrationType to List.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

<?xml version="1.0" encoding="utf-8" ?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <CustomAction
    Id="CustomCustomRibbonTab"
    Location="CommandUI.Ribbon.ListView"
    RegistrationId="0x01"
    RegistrationType="ContentType"
    Sequence="100"
    >
    <CommandUIExtension>
      <CommandUIDefinitions>
        <CommandUIDefinition
          Location="Ribbon.Tabs._children">
          <Tab
            Id="Ribbon.CustomRibbonTab"
            Title="Custom Tab"
            Description="Custom Tab Description"
            Sequence="501">
            <Scaling
              Id="Ribbon.CustomRibbonTab.Scaling">
              <MaxSize
                Id="Ribbon.CustomRibbonTab.MaxSize"
                GroupId="Ribbon.CustomRibbonTab.ManageCustomGroup"
                Size="OneLargeTwoMedium"/>
              <MaxSize
                Id="Ribbon.CustomRibbonTab.TabTwoMaxSize"
                GroupId="Ribbon.CustomRibbonTab.NewOpenCustomGroup"
                Size="TwoLarge" />
              <Scale
                Id="Ribbon.CustomRibbonTab.Scaling.CustomTabScaling"
                GroupId="Ribbon.CustomRibbonTab.ManageCustomGroup"
                Size="OneLargeTwoMedium" />
              <Scale
                Id="Ribbon.CustomRibbonTab.Scaling.CustomSecondTabScaling"
                GroupId="Ribbon.CustomRibbonTab.NewOpenCustomGroup"
                Size="TwoLarge" />
            </Scaling>
            <Groups Id="Ribbon.CustomRibbonTab.Groups">
              <Group
                Id="Ribbon.CustomRibbonTab.ManageCustomGroup"
                Description="Group to Custom Functions"
                Title="Manage Item"
                Sequence="52"
                Template="Ribbon.Templates.CustomTemplate">
                <Controls Id="Ribbon.CustomRibbonTab.ManageCustomGroup.Controls">
                  <Button
                    Id="Ribbon.CustomRibbonTab.ManageCustomGroup.Accept"
                    Command="CustomRibbonTab.AcceptCustomCommand"
                    Sequence="15"
                    Image32by32="{SiteUrl}/_layouts/15/1033/Images/formatmap32x32.png?rev=23"
                    Image32by32Top="-68"
                    Image32by32Left="-272"
                    Description="Accept Item"
                    LabelText="Accept"
                    TemplateAlias="AWR" />
                  <Button
                    Id="Ribbon.CustomRibbonTab.ManageCustomGroup.Reject"
                    Command="CustomRibbonTab.RejectCustomCommand"
                    Sequence="17"
                    Image16by16="{SiteUrl}/_layouts/15/1033/Images/formatmap16x16.png?rev=23"
                    Image16by16Top="-216"
                    Image16by16Left="-290"
                    Description="Reject Item"
                    LabelText="Reject"
                    TemplateAlias="RWR"/>
                  <Button
                    Id="Ribbon.CustomRibbonTab.ManageCustomGroup.Verify"
                    Command="CustomRibbonTab.VerifyCustomCommand"
                    Sequence="19"
                    Image16by16="{SiteUrl}/_layouts/15/1033/Images/formatmap16x16.png?rev=23"
                    Image16by16Top="-126"
                    Image16by16Left="-144"
                    Description="Verify Item"
                    LabelText="Verify"
                    TemplateAlias="ACWR"/>
                  <Button
                   Id="Ribbon.CustomRibbonTab.ManageCustomGroup.Close"
                   Command="CustomRibbonTab.CloseCustomCommand"
                   Sequence="19"
                   Image32by32="{SiteUrl}/_layouts/15/1033/Images/formatmap32x32.png?rev=23"
                   Image32by32Top="-0"
                   Image32by32Left="-34"
                   Description="Close Item"
                   LabelText="Close"
                   TemplateAlias="CWR"/>
                  <Button
                   Id="Ribbon.CustomRibbonTab.ManageCustomGroup.Copy"
                   Command="CustomRibbonTab.CopyCustomCommand"
                   Sequence="19"
                   Image32by32="{SiteUrl}/_layouts/15/1033/Images/formatmap32x32.png?rev=23"
                   Image32by32Top="-442"
                   Image32by32Left="-67"
                   Description="Copy Item"
                   LabelText="Copy"
                   TemplateAlias="CPWR"/>
                </Controls>
              </Group>
              <Group
                Id="Ribbon.CustomRibbonTab.NewOpenCustomGroup"
                Description="Group to manage item"
                Title="New &amp; Open"
                Sequence="53"
                Template="Ribbon.Templates.CustomTemplate">
                <Controls Id="Ribbon.CustomRibbonTab.NewOpenCustomGroup.Controls">
                  <Button
                    Id="Ribbon.CustomRibbonTab.NewOpenCustomGroup.New"
                    Command="CustomRibbonTab.NewCustomCommand"
                    Sequence="15"
                    Image32by32="{SiteUrl}/_layouts/15/1033/Images/formatmap32x32.png?rev=23"
                    Image32by32Top="-33"
                    Image32by32Left="-66"
                    Description="New Item"
                    LabelText="New"
                    TemplateAlias="LOR"/>
                  <Button
                   Id="Ribbon.CustomRibbonTab.NewOpenCustomGroup.Open"
                   Command="CustomRibbonTab.OpenCustomCommand"
                   Sequence="15"
                   Image32by32="{SiteUrl}/_layouts/15/1033/Images/formatmap32x32.png?rev=23"
                   Image32by32Top="-170"
                   Image32by32Left="-138"
                   Description="Open Item"
                   LabelText="Open"
                   TemplateAlias="LORS"/>
                </Controls>
              </Group>
            </Groups>
          </Tab>
        </CommandUIDefinition>
        <CommandUIDefinition Location="Ribbon.Templates._children">
          <GroupTemplate Id="Ribbon.Templates.CustomTemplate">
            <Layout
              Title="OneLargeTwoMedium"
              LayoutTitle="OneLargeTwoMedium">
              <Section Alignment="Top" Type="OneRow">
                <Row>
                  <ControlRef DisplayMode="Large" TemplateAlias="AWR" />
                </Row>
              </Section>
              <Section Alignment="Top" Type="TwoRow">
                <Row>
                  <ControlRef DisplayMode="Medium" TemplateAlias="RWR" />
                </Row>
                <Row>
                  <ControlRef DisplayMode="Medium" TemplateAlias="ACWR" />
                </Row>
              </Section>
              <Section Alignment="Top" Type="OneRow">
                <Row>
                  <ControlRef DisplayMode="Large" TemplateAlias="CWR" />
                </Row>
              </Section>
              <Section Alignment="Top" Type="OneRow">
                <Row>
                  <ControlRef DisplayMode="Large" TemplateAlias="CPWR" />
                </Row>
              </Section>
            </Layout>
            <Layout
             Title="TwoLarge"
             LayoutTitle="TwoLarge">
              <Section Alignment="Top" Type="OneRow">
                <Row>
                  <ControlRef DisplayMode="Large" TemplateAlias="LOR" />
                </Row>
              </Section>
              <Section Alignment="Top" Type="OneRow">
                <Row>
                  <ControlRef DisplayMode="Large" TemplateAlias="LORS" />
                </Row>
              </Section>
            </Layout>
          </GroupTemplate>
        </CommandUIDefinition>
      </CommandUIDefinitions>
      <CommandUIHandlers>
        <CommandUIHandler
          Command="CustomRibbonTab.AcceptCustomCommand"
          CommandAction="javascript:GetCurrentItem('AP');"/>
        <CommandUIHandler
          Command="CustomRibbonTab.RejectCustomCommand"
          CommandAction="javascript:GetCurrentItem('RJ');"/>
        <CommandUIHandler
          Command="CustomRibbonTab.VerifyCustomCommand"
          CommandAction="javascript:GetCurrentItem('AK');"/>
        <CommandUIHandler
          Command="CustomRibbonTab.NewCustomCommand"
          CommandAction="javascript:AddNewCustom();"/>
        <CommandUIHandler
          Command="CustomRibbonTab.OpenCustomCommand"
          CommandAction="javascript:OpenExistingCustom();"/>
        <CommandUIHandler
          Command="CustomRibbonTab.CloseCustomCommand"
          CommandAction="javascript:CloseExistingCustom();"/>
        <CommandUIHandler
          Command="CustomRibbonTab.CopyCustomCommand"
          CommandAction="javascript:CopyCustom();"/>
      </CommandUIHandlers>
    </CommandUIExtension>
  </CustomAction>
</Elements>

Note If you use the embed JavaScript technique to implement event handling for your ribbons’ buttons, your JavaScript file must implement the methods defined in the CommandUIHandler elements. For example, your embedded JavaScript file should implement functions like GetCurrentItem and AddNewCustom.
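As an illustration (not part of the sample), an embedded JavaScript file could stub those handlers like this. The function names and the status codes ('AP', 'RJ', 'AK') come from the CommandUIHandler markup above; the dispatch table and return strings are placeholders for your own logic, which in practice would use JSOM (for example, SP.ListOperation.Selection.getSelectedItems) to act on the selected list item.

```javascript
// Hypothetical skeletons for the ribbon button handlers referenced in
// the CommandUIHandler elements. Replace the bodies with real logic.
var statusActions = {
  AP: "Accept",   // CustomRibbonTab.AcceptCustomCommand
  RJ: "Reject",   // CustomRibbonTab.RejectCustomCommand
  AK: "Verify"    // CustomRibbonTab.VerifyCustomCommand
};

function GetCurrentItem(statusCode) {
  // Map the code passed by the ribbon button to a friendly action name.
  var action = statusActions[statusCode] || "Unknown";
  // A real handler would act on the selected item here, e.g. via
  // SP.ListOperation.Selection.getSelectedItems(clientContext).
  return action + " requested for the current item.";
}

function AddNewCustom() {
  // Placeholder for the New button's logic.
  return "New item requested.";
}
```

The remaining handlers (OpenExistingCustom, CloseExistingCustom, CopyCustom) would follow the same pattern.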

InitializeButton_Click in Default.aspx performs the following tasks:

  1. Calls GetCustomActionXmlNode to read the XML file and return the CustomAction object defined in RibbonCommands.xml. The CustomAction object contains the ribbon customization markup.

  2. Reads several elements and attribute values from the CustomAction object.

  3. Converts the CommandUIExtension element (which defines the ribbon groups, buttons, and UI event handlers) to a string called xmlContent.

  4. Creates a new custom action by using clientContext.Web.UserCustomActions.Add.

  5. Adds the ribbon customization markup (in xmlContent) to the SharePoint site using the CustomAction.CommandUIExtension.

  6. Registers the custom ribbon by setting the CustomAction.RegistrationId and CustomAction.RegistrationType to the attribute values of the CustomAction object read in step 2.

 protected void InitializeButton_Click(object sender, EventArgs e) {
            var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);

            using (var clientContext = spContext.CreateUserClientContextForSPHost()) {
                clientContext.Load(clientContext.Web, web => web.UserCustomActions);
                clientContext.ExecuteQuery();

                // Get the XML elements file and get the CommandUIExtension node.
                var customActionNode = GetCustomActionXmlNode();
                var customActionName = customActionNode.Attribute("Id").Value;
                var commandUIExtensionNode = customActionNode.Element(ns + "CommandUIExtension");
                var xmlContent = commandUIExtensionNode.ToString();
                var location = customActionNode.Attribute("Location").Value;
                var registrationId = customActionNode.Attribute("RegistrationId").Value;
                var registrationTypeString = customActionNode.Attribute("RegistrationType").Value;
                var registrationType = (UserCustomActionRegistrationType)Enum.Parse(typeof(UserCustomActionRegistrationType), registrationTypeString);

                var sequence = 1000;
                // XML attributes are not namespace-qualified, so read Sequence without the namespace prefix.
                if (customActionNode.Attribute("Sequence") != null) {
                    sequence = Convert.ToInt32(customActionNode.Attribute("Sequence").Value);
                }

                // Determine if the custom action exists already.
                var customAction = clientContext.Web.UserCustomActions.FirstOrDefault(uca => uca.Name == customActionName);

                // If the custom action does not exist, create it.
                if (customAction == null) {
                    // create the ribbon.
                    customAction = clientContext.Web.UserCustomActions.Add();
                    customAction.Name = customActionName;
                }

                // Set custom action properties.
                customAction.Location = location;
                customAction.CommandUIExtension = xmlContent; // CommandUIExtension xml
                customAction.RegistrationId = registrationId;
                customAction.RegistrationType = registrationType;
                customAction.Sequence = sequence;

                customAction.Update();
                clientContext.Load(customAction);
                clientContext.ExecuteQuery();
            }
        }


3 Autotagging sample add-in for SharePoint

As part of your Enterprise Content Management (ECM) strategy, you can automatically tag documents with metadata when they are created or uploaded to SharePoint.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The ECM.AutoTagging sample shows you how to use a provider-hosted add-in to automatically tag content added to a SharePoint library with data sourced from a custom user profile property. This add-in uses remote event receivers, hosted on an Azure Web Site, to:

  • Create fields, content types, and document libraries.

  • Retrieve the value of a custom user profile property.

  • Set taxonomy fields.

Use this solution if you want to:

  • Implement event receivers in SharePoint Online.

  • Improve search results by attaching additional metadata to content when it’s created.

  • Classify your content.

  • Modernize your code before migrating to a newer version of SharePoint, and you’ve used event receivers in the past.

3.1 Before you begin


To get started, download the ECM.AutoTagging sample add-in from the Office 365 Developer patterns and practices project on GitHub.

Before you run this add-in, do the following:

  1. Create an Azure Web Site and deploy the ECM.AutoTaggingWeb project to it.

  2. Register your add-in using the Appregnew.aspx page in Office 365.

  3. This add-in uses app-only permissions. You need to assign app-only permissions using the AppInv.aspx page in Office 365. Copy the following XML from the AppManifest.xml file to the Permission Request XML textbox on the AppInv.aspx page, as shown in Figure 1.

      <AppPermissionRequests AllowAppOnlyPolicy="true">
        <AppPermissionRequest Scope="http://sharepoint/content/tenant" Right="FullControl" />
        <AppPermissionRequest Scope="http://sharepoint/taxonomy" Right="Read" />
        <AppPermissionRequest Scope="http://sharepoint/social/tenant" Right="Read" />
      </AppPermissionRequests>

    Figure 1. Assigning app-only permissions by using the AppInv.aspx page in Office 365

    Screenshot of the AppInv.aspx page, with the App ID and Permission Request XML boxes highlighted

  4. In the ECM.AutoTaggingWeb project, in the ReceiverHelper.cs file, in the CreateEventReciever method, update the ReceiverUrl property with the URL of your Azure Web Site.

        public static EventReceiverDefinitionCreationInformation CreateEventReciever(string receiverName, EventReceiverType type)
            {
    
                EventReceiverDefinitionCreationInformation _rer = new EventReceiverDefinitionCreationInformation();
                _rer.EventType = type;
                _rer.ReceiverName = receiverName;
                _rer.ReceiverClass = "ECM.AutoTaggingWeb.Services.AutoTaggingService";
                _rer.ReceiverUrl = "https://<Your domain>.azurewebsites.net/Services/AutoTaggingService.svc";
                _rer.Synchronization = EventReceiverSynchronization.Synchronous;
                return _rer;
            }
    
  5. Package and deploy your add-in.

When you start the add-in, the start page of the Document Autotagging provider-hosted add-in displays, as shown in Figure 2. The start page shows some additional configuration steps you need to perform before you assign or remove the event receivers.

Figure 2. Additional configuration steps to be performed on the add-in start page in SharePoint

Screenshot of the autotagging add-in start page, with three setup steps highlighted.

3.2 Using the ECM.Autotagging sample add-in


This sample uses a remote event receiver to automatically tag (add metadata to) documents that are added to a document library, with data from a custom user profile property. The process flow for autotagging documents using the remote event receiver is shown in Figure 3.

Figure 3. Process flow for tagging documents in a document library by using a remote event receiver

An illustration of the process for tagging a document in a library. When the user creates content, the add-in contacts the event receiver, which accesses the user’s profile and submits information to SharePoint.

To assign metadata to the newly created document in the document library by using a remote event receiver:

  1. A user creates or uploads new content to a document library. A remote event receiver is assigned to handle ItemAdding or ItemAdded events on this document library.

  2. The ItemAdding or ItemAdded method makes a call to the remote event receiver.

  3. The provider-hosted add-in fetches the value of a custom user profile property in the User Profile Service of SharePoint for that user. In this sample add-in, the Classification custom user profile property that was added previously is retrieved.

  4. The remote event receiver updates the metadata on the new document with the value of the custom user profile property for that user.

3.2.1 Run Scenario 1

When you choose the Run Scenario 1 button, the add-in does the following:

  1. Creates a document library.

  2. Creates the remote event receiver for the ItemAdding event.

    Note This article discusses the ItemAdding event receiver type. Generally, the ItemAdding event receiver type performs better than the ItemAdded event receiver type. The ECM.Autotagging sample provides code for both the ItemAdding and ItemAdded event receiver types.

  3. Adds the remote event receiver to the document library.

The following code, in the btnScenario1_Click method of the Default.aspx.cs page in the ECM.AutoTaggingWeb project, shows these steps.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

protected void btnScenario1_Click(object sender, EventArgs e)
        {
            var _libraryToCreate = this.GetLibaryInformationItemAdding();
 
            var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);
            using (var ctx = spContext.CreateUserClientContextForSPHost())
            {
                try 
                { 
                    if(!ctx.Web.ListExists(_libraryToCreate.Title))
                    {
                        ScenarioHandler _scenario = new ScenarioHandler();
                        _scenario.CreateContosoDocumentLibrary(ctx, _libraryToCreate);
                    }
                    List _list = ctx.Web.Lists.GetByTitle(_libraryToCreate.Title);
                    EventReceiverDefinitionCreationInformation _rec = ReceiverHelper.CreateEventReciever(ScenarioHandler.AUTOTAGGING_ITEM_ADDING_RERNAME, EventReceiverType.ItemAdding);
                    ReceiverHelper.AddEventReceiver(ctx, _list, _rec);
                }
                catch(Exception _ex)
                {
                    // The sample swallows exceptions here; log or rethrow in production code.
                }
            }
        }  

A call is made to the CreateContosoDocumentLibrary method. The following code in the ScenarioHandler.cs file uses methods from OfficeDevPnP.Core to create a custom document library with a custom content type. The default content type in the document library is removed.

public void CreateContosoDocumentLibrary(ClientContext ctx, Library library)
        {
            // Check the fields.
            if (!ctx.Web.FieldExistsById(FLD_CLASSIFICATION_ID))
            {
                ctx.Web.CreateTaxonomyField(FLD_CLASSIFICATION_ID,
                                            FLD_CLASSIFICATION_INTERNAL_NAME,
                                            FLD_CLASSIFICATION_DISPLAY_NAME,
                                            FIELDS_GROUP_NAME,
                                            TAXONOMY_GROUP,
                                            TAXONOMY_TERMSET_CLASSIFICATION_NAME);
            }

            // Check the content type.
            if (!ctx.Web.ContentTypeExistsById(CONTOSODOCUMENT_CT_ID))
            {
                ctx.Web.CreateContentType(CONTOSODOCUMENT_CT_NAME,
                                          CT_DESC, CONTOSODOCUMENT_CT_ID,
                                          CT_GROUP);
            }

            // Associate fields to content types.
            if (!ctx.Web.FieldExistsByNameInContentType(CONTOSODOCUMENT_CT_NAME, FLD_CLASSIFICATION_INTERNAL_NAME))
            {
                ctx.Web.AddFieldToContentTypeById(CONTOSODOCUMENT_CT_ID,
                                                  FLD_CLASSIFICATION_ID.ToString(),
                                                  false);
            }

            
            CreateLibrary(ctx, library, CONTOSODOCUMENT_CT_ID);
        }

private void CreateLibrary(ClientContext ctx, Library library, string associateContentTypeID)
        {
            if (!ctx.Web.ListExists(library.Title))
            {
                ctx.Web.AddList(ListTemplateType.DocumentLibrary, library.Title, false);
                List _list = ctx.Web.GetListByTitle(library.Title);
                if (!string.IsNullOrEmpty(library.Description))
                {
                    _list.Description = library.Description;
                }

                if (library.VerisioningEnabled)
                {
                    _list.EnableVersioning = true;
                }

                _list.ContentTypesEnabled = true;
                _list.RemoveContentTypeByName("Document");
                _list.Update();
                
     
                ctx.Web.AddContentTypeToListById(library.Title, associateContentTypeID, true);
                ctx.Web.Context.ExecuteQuery();
               
            }
            else
            {
                throw new Exception("A list, survey, discussion board, or document library with the specified title already exists in this Web site.  Please choose another title.");
            }
        }

After this code runs, the AutoTaggingSampleItemAdding document library is created in Site Contents, as shown in Figure 4.

Figure 4. AutoTaggingSampleItemAdding document library

Screenshot showing the Site Contents page with the new AutoTaggingSampleItemAdding document library.

In the ECM.AutoTaggingWeb project, in the ReceiverHelper.cs file, the CreateEventReciever method creates the ItemAdding event receiver definition. In the ECM.AutoTaggingWeb project, the Services folder includes a web service called AutoTaggingService.svc. When you published the ECM.AutoTaggingWeb project to your Azure Web Site, this web service was also deployed to your site. The CreateEventReciever method assigns this web service as the remote event receiver on the document library. The following code from the CreateEventReciever method shows how to assign the web service to the remote event receiver.

public static EventReceiverDefinitionCreationInformation CreateEventReciever(string receiverName, EventReceiverType type)
        {

            EventReceiverDefinitionCreationInformation _rer = new EventReceiverDefinitionCreationInformation();
            _rer.EventType = type;
            _rer.ReceiverName = receiverName;
            _rer.ReceiverClass = "ECM.AutoTaggingWeb.Services.AutoTaggingService";
            _rer.ReceiverUrl = "https://<Your domain>.azurewebsites.net/Services/AutoTaggingService.svc";
            _rer.Synchronization = EventReceiverSynchronization.Synchronous;
            return _rer;
        }

The following code from the AddEventReceiver method assigns the remote event receiver to the document library.

public static void AddEventReceiver(ClientContext ctx, List list, EventReceiverDefinitionCreationInformation eventReceiverInfo)
        {
            if (!DoesEventReceiverExistByName(ctx, list, eventReceiverInfo.ReceiverName))
            {
                list.EventReceivers.Add(eventReceiverInfo);
                ctx.ExecuteQuery();
            }
        }

Now, the remote event receiver is added to the document library. When you upload a document to the AutoTaggingSampleItemAdding document library, the document will be tagged with the value of the Classification custom user profile property for that user. Figure 5 shows how to view the properties on a document. Figure 6 shows the document’s metadata with the Classification field.

Figure 5. Viewing document properties

Screenshot of a test document in the library with the properties expanded.
Figure 6. Classification field in the document metadata

Screenshot showing the metadata of the test document, with HBI in the Classification field.

The HandleAutoTaggingItemAdding method, in the AutoTaggingService.svc.cs file, uses the GetProfilePropertyFor method to retrieve the value of the Classification user profile property.

public void HandleAutoTaggingItemAdding(SPRemoteEventProperties properties, SPRemoteEventResult result)
{
    using (ClientContext ctx = TokenHelper.CreateRemoteEventReceiverClientContext(properties))
    {
        if (ctx != null)
        {
            var itemProperties = properties.ItemEventProperties;
            var _userLoginName = itemProperties.UserLoginName;
            var _afterProperties = itemProperties.AfterProperties;
            if (!_afterProperties.ContainsKey(ScenarioHandler.FLD_CLASSIFICATION_INTERNAL_NAME))
            {
                string _classificationToSet = ProfileHelper.GetProfilePropertyFor(ctx, _userLoginName, Constants.UPA_CLASSIFICATION_PROPERTY);
                if (!string.IsNullOrEmpty(_classificationToSet))
                {
                    var _formatTaxonomy = AutoTaggingHelper.GetTaxonomyFormat(ctx, _classificationToSet);
                    result.ChangedItemProperties.Add(ScenarioHandler.FLD_CLASSIFICATION_INTERNAL_NAME, _formatTaxonomy);
                }
            }
        }
    }
}

Important After retrieving the Classification value from the GetProfilePropertyFor method, the Classification value must be formatted in a certain way before it can be stored as metadata on the document. The GetTaxonomyFormat method in the AutoTaggingHelper.cs file shows how to format the Classification value.

public static string GetTaxonomyFormat(ClientContext ctx, string term)
{
    if (string.IsNullOrEmpty(term))
    {
        throw new ArgumentException(string.Format(EXCEPTION_MSG_INVALID_ARG, "term"));
    }
    string _result = string.Empty;
    var _list = ctx.Web.Lists.GetByTitle(TAXONOMY_HIDDEN_LIST_NAME);
    CamlQuery _caml = new CamlQuery();

    _caml.ViewXml = string.Format(TAXONOMY_CAML_QRY, term);
    var _listItemCollection = _list.GetItems(_caml);

    ctx.Load(_listItemCollection,
        eachItem => eachItem.Include(
            item => item.Id,
            item => item[TAXONOMY_FIELDS_IDFORTERM]));
    ctx.ExecuteQuery();

    if (_listItemCollection.Count > 0)
    {
        var _item = _listItemCollection.FirstOrDefault();
        var _wssId = _item.Id;
        var _termId = _item[TAXONOMY_FIELDS_IDFORTERM].ToString();
        _result = string.Format(TAXONOMY_FORMATED_STRING, _wssId, term, _termId);
    }

    return _result;
}
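For context, SharePoint serializes a single managed metadata value as `<wssId>;#<label>|<termGuid>`. The sketch below shows what the sample's TAXONOMY_FORMATED_STRING constant likely looks like; the exact constant value is an assumption based on that standard serialization format, not taken from the sample.

```csharp
// Standard serialized form of a single managed metadata (taxonomy) value:
//   <wssId>;#<label>|<termGuid>
// e.g. "1;#HBI|87631f80-0000-0000-0000-000000000000" (GUID is illustrative)
// Assumption: the sample's constant follows this pattern.
private const string TAXONOMY_FORMATED_STRING = "{0};#{1}|{2}";
```

With this format string, `string.Format(TAXONOMY_FORMATED_STRING, _wssId, term, _termId)` yields a value that SharePoint can parse back into a taxonomy field value when it is written to the item's Classification field.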

3.2.2 Remove Event Scenario 1

When you choose the Remove Event Scenario 1 button, the following code runs to remove the event receiver from the document library.

public static void RemoveEventReceiver(ClientContext ctx, List list, string receiverName)
{
    ctx.Load(list, lib => lib.EventReceivers);
    ctx.ExecuteQuery();

    var _rer = list.EventReceivers.FirstOrDefault(e => e.ReceiverName == receiverName);
    if (_rer != null)
    {
        _rer.DeleteObject();
        ctx.ExecuteQuery();
    }
}
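A hypothetical call to this helper might look like the following. The site URL and the receiver name ("AutoTaggingItemAddingEvent") are illustrative only; use the library title and receiver name from your own deployment.

```csharp
// Illustrative usage sketch: connect to the site and remove the
// remote event receiver from the document library by name.
using (ClientContext ctx = new ClientContext("https://contoso.sharepoint.com/sites/dev"))
{
    List library = ctx.Web.Lists.GetByTitle("AutoTaggingSampleItemAdding");
    RemoveEventReceiver(ctx, library, "AutoTaggingItemAddingEvent");
}
```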

3.3 Additional resources

<a name="bk_addresources"> </a>

4 Branding and site provisioning solutions for SharePoint 2013 and SharePoint Online

The introduction of the Cloud Add-in Model and add-ins for SharePoint provides alternatives to existing, established ways of branding and provisioning SharePoint sites.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

Up to now, you might have used the SharePoint feature framework, site templates, web templates, and site definitions to provision your sites and site collections. The remote provisioning pattern shows you how to create custom add-ins for SharePoint that provision site branding and perform other site provisioning tasks.
The articles in this section provide information about using add-ins for SharePoint to provision and manage site branding, a pattern that is sometimes referred to as remote provisioning.

4.1 What you need to know about SharePoint

<a name="sectionSection0"> </a>

To use the branding and site provisioning capabilities in SharePoint, you’ll need to be familiar with the following:

  • Key SharePoint terms and concepts.

  • The hierarchy of SharePoint structural elements.

  • Types of SharePoint sites and why you would use them.

  • The file system and content databases, and how they work together.

  • File customization states and their long-term impact on upgrades.

  • add-ins for SharePoint.

  • Client-side programming in SharePoint with the .NET client-side object model (CSOM) and REST APIs.

4.2 Key SharePoint terms and concepts

<a name="sectionSection1"> </a>

The following table lists terms and concepts that are useful to know as you start to work with SharePoint site provisioning and branding with the remote provisioning pattern.

SharePoint terms and concepts

|Term or concept|Description|For more information|
|:-----|:-----|:-----|
|Add-in for SharePoint|Lightweight web applications that integrate popular web standards and technologies to extend the capabilities of a SharePoint website by using the Cloud Add-in Model.|Build add-ins for SharePoint|
|Add-in web|The website from which an add-in is deployed.|Host webs, add-in webs, and SharePoint components in SharePoint 2013|
|Approval workflow|A workflow, specific to publishing sites, that specifies who approves the publication of a page and when.|SharePoint 2013 approval workflow<br/>Get started with workflows in SharePoint 2013|
|ClientContext|A central object that serves as a "center of gravity" for all SharePoint CSOM and JSOM operations.|T:Microsoft.SharePoint.Client.ClientContext|
|Cloud Add-in Model|Add-ins for SharePoint are self-contained pieces of functionality that extend the capabilities of a SharePoint website. You can use the Cloud Add-in Model to author and deliver secure, reliable, flexible, and consistent add-ins for SharePoint.|Overview of add-ins for SharePoint|
|Content database|Content databases store all content for a site collection, including site documents and files in document libraries, list data, Web Part properties, and user names and rights.|Database types and descriptions|
|CSOM|Client-side object model. A model for writing client-side code for SharePoint using the .NET Framework.|Get started using the client object model with external data in SharePoint 2013<br/>[MS-CSOM]: SharePoint Client Query Protocol<br/>SharePoint 2013 .NET Server, CSOM, JSOM, and REST API index|
|Hive|SharePoint's physical files in the file system, as distinct from content stored in a content database. The hive is located at %program files%/Common Files/Microsoft Shared/Web Server Extensions/15/.| |
|Host web|The website on which an add-in is installed.|Host webs, add-in webs, and SharePoint components in SharePoint 2013|
|OneDrive for Business|A personal library for storing and organizing work documents that are shareable within your organization. OneDrive for Business is different from the consumer OneDrive service.|OneDrive for Business is different from your team site|
|Remote provisioning|A model that provisions sites by using templates and code that runs outside SharePoint in a provider-hosted add-in.|Site provisioning techniques and remote provisioning in SharePoint 2013<br/>Self-service site provisioning using add-ins for SharePoint 2013|
|REST|A stateless architectural style that abstracts architectural elements and uses standard HTTP verbs to read and write data.|How to: Set custom permissions on a list by using the REST interface|
|Root web|The first web inside a site collection. The root web is also sometimes referred to as the "Web Application Root."| |
|SharePoint Online|Cloud-based SharePoint offering in Office 365.|SharePoint Online General Reference|
|Site|A collection of lists, libraries, pages, and other elements, usually organized around a central topic or theme. A site can contain subsites.|T:Microsoft.SharePoint.Client.Web|
|Site collection|A group of sites that share the same owner and administrative settings, such as permissions.|Create a site collection in SharePoint 2013|
|Site provisioning|A process for equipping sites with features, structure, brand, and other functionality.|Site provisioning techniques and remote provisioning in SharePoint 2013<br/>Self-service site provisioning using add-ins for SharePoint 2013|
|Subsite|A single SharePoint site in a SharePoint site collection. A subsite can inherit navigation and permissions from its parent site, or it can have unique permissions and navigation. You can create subsites based on the root site of the site collection or on other sites.| |

4.3 Hierarchy of SharePoint structural elements

<a name="sectionSection2"> </a>

Conceptually, the hierarchy of SharePoint objects is expressed in terms of containers: each type of object in the hierarchy contains all of the object types below it. Table 2 lists the hierarchy of SharePoint structural elements.

Table 2. SharePoint structural elements

|Object type (in hierarchical order)|Description|
|:-----|:-----|
|Web applications|Applications that exist on a server and respond to requests from a browser. Web applications are the central structure in Internet Information Services (IIS). In SharePoint, a web application is a website with a unique URL and a separate content database stored in SQL Server.|
|Site collections|Containers of sites that define permissions, and that can define some aspects of branding for all sites within the container, depending on the configuration.|
|Sites|Collections of lists, libraries, structure, navigation, and look and feel elements, usually organized around a central topic or theme. Sites that are children of other sites in the same site collection are called subsites. A subsite can inherit permissions and navigation structure from its parent site, or it can have unique permissions and navigation. Subsites can have child subsites.|
|Apps, lists, and document libraries|Containers of content and data that are organized into specific structures. The Master Page Gallery is a special document library in SharePoint publishing sites where all branding elements - master pages, page layouts, JavaScript files, CSS, and images - are stored by default. Every site has its own Master Page Gallery. In Team sites, the master page comes from the site, not the site collection.|
|Items|Individual pieces of content or data that are contained in apps, lists, and document libraries.|

4.4 add-ins for SharePoint

<a name="sectionSection3"> </a>

add-ins for SharePoint are lightweight solutions that don’t install on the SharePoint host server, which means they don’t make excessive API calls to the host server. You can build add-ins for SharePoint by using the Cloud Add-in Model. Users can discover and download add-ins from the Office Store or from the enterprise’s App Catalog. For more information, see Overview of add-ins for SharePoint.

4.5 File system and content databases, and how they work together

<a name="sectionSection4"> </a>

To understand your branding options and the implications that site customization can have on upgrade and migration, you’ll need to understand the SharePoint file system and content databases.

4.5.1 File system

SharePoint stores files in the file system (“hive”). In SharePoint 2013, this location is called the “15-hive”. The following is the path to the 15-hive.

`%program files%/Common Files/Microsoft Shared/Web Server Extensions/15/`

The 15-hive includes several subfolders that store files you’ll use when branding and provisioning sites.

4.5.2 Content databases

Content databases store SharePoint content objects, such as site collections. A content database is automatically installed for every site collection when you deploy SharePoint 2013. All the content for a site collection is stored in one content database on one server. However, a content database can be associated with more than one site collection, and you can attach content databases to a SharePoint web application. You might need to move content from one content database to another, for example when the size of the content will soon exceed the size of the content database.

Some characteristics of a content database vary depending on how the site collection is used. For example, sites are often write-intensive, while other types of content, such as read-only documents, are read-intensive. How content is used affects aspects of the content database, such as size and performance.

4.6 File customization states and their effects on upgrade

<a name="sectionSection5"> </a>

The state of SharePoint files and content affects how easy it is to apply updates, and controls whether SharePoint serves the file from the content database or the file system. By default, all SharePoint files are uncustomized and ghosted, and reside in matching states in the SharePoint file system and in the content database. When a file, a content database entry, or both are used in specific ways or changed, the state of that content might be affected.

Table 3. File and content states

|File or content state|Definition|Comment|
|:-----|:-----|:-----|
|uncustomized|An attribute associated with a file that indicates that it hasn't been modified.|More than one copy of a file can point to the same source. This makes it easier for designers to implement changes.|
|customized|An attribute associated with a file that indicates that it has been modified.|After a file becomes customized, it becomes more difficult to apply broad updates. Be very careful about what you customize. As a general rule, it's better to use the default SharePoint files and functionality than to customize system files or introduce customizations that need to be manually updated.|
|ghosted|A file whose source is stored outside the content database. A pointer in the content database (the ghost of the file) tells SharePoint to look for the file's source on the server's file system.| |
|unghosted|An uncustomized version of the source file resides in the content database.|Example: The SharePoint 2013 Design Manager creates a sandboxed solution to package branding files. It is never added to the file system of the server, so by definition its files are unghosted. The files it deploys are still in an uncustomized state.|

Note If a file has been customized, it won’t be updated when you install new service packs or the SharePoint Online service is updated.

4.7 Site branding and provisioning with the Cloud Add-in Model

<a name="sectionSection6"> </a>

In SharePoint 2013, you can use custom CSOM code in add-ins for SharePoint to provision SharePoint site collections, sites, and subsites with branding elements. This site provisioning pattern is called remote provisioning. SharePoint is increasingly focused on cloud-based deployments, so this pattern was created to help you use SharePoint’s out-of-the-box capabilities to provision site branding in a way that reduces complexity and long-term operational costs.

4.7.1 What can I do with the Cloud Add-in Model?

Sometimes, there is no correlation between features in full-trust code and the Cloud Add-in Model. When developing a customization based on add-ins for SharePoint and the Cloud Add-in Model, consider an alternative approach rather than a direct conversion, and strive to keep customizations as simple as possible. Here are some examples:

  • Replace event receivers with remote event receivers (see How to: Create a remote event receiver).

  • Replace site templates, web templates, and site definitions with remote provisioning. This works for both subsites and site collections.

  • Replace timer jobs with Microsoft Azure or on-premises worker roles.

Some things, such as HTTP modules and HTTP handlers, cannot be built with the Cloud Add-in Model. Before you try to replicate an existing customization in the Cloud Add-in Model, first consider why those customizations were built and whether an out-of-the-box SharePoint feature can meet the need.

4.7.2 Remote provisioning pattern

Remote provisioning uses new add-in patterns to move provisioning logic outside of the SharePoint farm entirely. This approach eliminates the need to use the feature framework or other customizations in the SharePoint farm, and instead enables you to control customizations outside of SharePoint. It also makes it possible to update and change the provisioning engine without affecting SharePoint availability. For more information about the feature framework, see Site Definitions and Provisioning: the Feature Framework. Aspects and implementations of the remote provisioning pattern are documented in detail in this section.

In the simplest implementation of the remote provisioning pattern, provisioning requirements are stored in a SQL Server or SQL Azure database or XML file; then, an add-in for SharePoint reads requirements from the data source, reads branding elements from their source location, and applies branding elements to the target site based on the provisioning requirements.
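To make the pattern concrete, the following sketch shows what the core of such an add-in might look like, assuming the request has already been read from the data source. This is not the sample code; the site URL, request values, and storage shape are hypothetical.

```csharp
// Hypothetical sketch of the simplest remote provisioning flow:
// read a provisioning request, then create a subsite with CSOM.
// Requires the Microsoft.SharePoint.Client assemblies.
using (ClientContext ctx = new ClientContext("https://contoso.sharepoint.com/sites/requests"))
{
    // In a real implementation, the request would come from SQL Server,
    // SQL Azure, or an XML file, as described above.
    var request = new { Title = "Team Alpha", Url = "team-alpha", Template = "STS#0" };

    var creation = new WebCreationInformation
    {
        Title = request.Title,
        Url = request.Url,
        WebTemplate = request.Template,
        UseSamePermissionsAsParentSite = true
    };
    Web newWeb = ctx.Web.Webs.Add(creation);
    ctx.ExecuteQuery();

    // Branding elements (theme, alternate CSS, logo) would then be
    // applied to newWeb based on the stored provisioning requirements.
}
```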

The branding and provisioning code samples follow this sequence of events to show the remote provisioning pattern.

Table 4. Basic remote provisioning sequence and associated samples

|Step|Description|Samples|Article|
|:-----|:-----|:-----|:-----|
|1|The user requests a change to the site through a form, which kicks off an approval workflow. The data that the user submits via the request form can be stored in any data storage format (SQL, SQL Azure, XML).|SharePoint 2013: Use workflow to provision a SharePoint site (host web)<br/>SharePoint 2013: Use workflow to provision a SharePoint site (app web)|SharePoint 2013 site provisioning|
|2|If the workflow is approved, the add-in for SharePoint reads the stored data and provisions the site according to the metadata that the user submitted in step 1.|Provision sites in batches with the add-in model<br/>SharePoint 2013: Use workflow to provision a SharePoint site (host web)<br/>SharePoint 2013: Use workflow to provision a SharePoint site (app web)<br/>SharePoint 2013: Use add-ins for SharePoint to provision on-prem site collection|SharePoint 2013 site provisioning|
|3|The add-in for SharePoint scopes provisioning to the instructions in the request form by using the data available in the add-in web and content database. During this stage, applicable branding elements are provisioned to the site.|SharePoint 2013: Use an add-in for SharePoint to configure CSS<br/>SharePoint 2013: Use an add-in for SharePoint to apply a theme to a SharePoint site<br/>SharePoint 2013: Brand a SharePoint OneDrive For Business site<br/>SharePoint 2013: Provision custom CSS to a site with remote provisioning<br/>SharePoint 2013: Use an add-in for SharePoint to provision a wiki page|SharePoint pages and the page model<br/>SharePoint site branding and page customization solutions<br/>SharePoint 2013 site provisioning|

Note Table 4 lists the steps that might be typical of a remote provisioning scenario. The samples you use depend on the approach that works best for your enterprise. For example, if you don’t have a business need to create a custom approval workflow, you won’t use that sample.

Figure 1. Example of a site provisioning and branding workflow using the remote provisioning pattern

A flowchart that shows the site provisioning and branding workflow using remote provisioning

4.7.3 How remote provisioning affects pre-existing site content

Depending on the specific site elements you want to provision, your code overrides default or pre-existing site content by means of a hook for the remote provisioning add-in for SharePoint. The add-in selects site templates and other capabilities based on the provisioning requirements stored in the database, without configuring SharePoint at all.

The basic remote provisioning pattern is the same regardless of additional requirements. However, when you plan to use this pattern to provision site branding, map your brand development strategy in the context of the customization capabilities that SharePoint CSOM, JSOM, and REST APIs provide (the code samples described in this section use CSOM). Also consider:

  • Site architecture. Are you building an Internet-facing site, an intranet site, or an extranet that requires authorized users to log on through the Internet-facing site to access company data?

  • The degree of control that specific users have to define and request provisioning requirements. Should users be able to specify custom provisioning options using a form? Are changes applied to the site automatically, only after people with decision-making power approve the changes, or are they managed by a governance policy?

  • The types of branding customizations you want to apply (structural, look and feel, or both).

4.8 Branding and site provisioning code samples

<a name="sectionSection7"> </a>

The code samples described in this section show the core scenario and extend it to cover some more specific use cases. The articles in this section also include some code examples. The following tables list and describe the samples.

Table 5. Site provisioning samples

|Sample|Description|Related article|
|:-----|:-----|:-----|
|Batch Provisioning|Provisions site collections in a console app.|SharePoint 2013 site provisioning|
|Provisioning.Pages|Shows how to use the remote provisioning model to provision a Wiki page and add remote Web Parts and HTML to the Wiki page.|SharePoint pages and the page model|
|SiteProvisioningWorkflow|Provisions site collections with a workflow on the host web and a remote event receiver.|SharePoint 2013 site provisioning|
|SiteProvisioningWorkflowAppWeb|Provisions site collections with a workflow on the add-in web and a remote event receiver.|SharePoint 2013 site provisioning|

Note The Batch Provisioning, SiteProvisioningWorkflow, and SiteProvisioningWorkflowAppWeb samples demonstrate the core concepts and functions of the remote provisioning pattern. The Provisioning.Pages sample addresses a specific use case (Wiki page provisioning).

Table 6. Branding samples

|Sample|Description|Related article|
|:-----|:-----|:-----|
|Branding.Theme|Shows how to apply a theme (CSOM).|SharePoint site branding and page customization solutions|
|OD4B.Configuration.Async|Shows how to apply custom configuration and branding to OneDrive for Business sites asynchronously.|SharePoint site branding and page customization solutions|
|Branding.AlternateCSSAndSiteLogo|Shows how to apply custom CSS to the host site by using a custom user action and embedded JavaScript (CSOM).|SharePoint site branding and page customization solutions|
|Provisioning.OnPrem.Async<br/>Provisioning.SiteCol.OnPrem|Shows how to use a service to encapsulate the information passed from the SharePoint host web to the add-in web, get a list of site collections in a specified web application, and create a content type with a specific ContentTypeId. This approach is especially useful when you want to use the remote provisioning pattern to provision sites with add-ins for SharePoint, but the member you need to complete your scenario is not yet available in CSOM.| |

4.9 SharePoint branding workflow

<a name="sectionSection8"> </a>

Branding a SharePoint website is a lot like branding other websites. You use web technologies that you’re familiar with, such as HTML, CSS, and JavaScript to build the structure, look and feel, and custom behavior of your sites. SharePoint is also based on ASP.NET, and uses a page model that is very similar to the ASP.NET master page/page layout model. The page model encompasses the structure and provides hooks and logic for applying look and feel elements.

SharePoint provides several Web Parts you can use to incorporate data views, images, scripts, search results, and more into your site design. Composed looks give users an easy way to customize the look and feel of their site while preserving the designer's and IT department's control over the design details and look and feel options that are available. Both the theming engine and custom CSS capabilities open the door to more advanced branding customizations.

The branding design and development workflow for SharePoint websites closely resembles the design workflow the industry uses:

  • Plan your site architecture and design.

  • Create design assets using familiar web design tools and technologies.

  • Build your site using SharePoint tools such as Design Manager.

  • Package your site design, and use add-ins for SharePoint and the remote provisioning pattern to provision site branding.

Note Applying branding in SharePoint means modifying the look and feel of a default SharePoint site. This can include making both structural and cosmetic changes to the site's appearance.

4.9.1 Branding cost and complexity

Branding changes range from low-cost and simple to high-cost and complex. Through the UI, users can apply composed looks, which include a background image, color palette, fonts, and a master page associated with these elements, and a preview file associated with the master page. You can use the SharePoint 2013 theming engine to create your own themes, and you can create custom CSS to modify the look and feel of your site.
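As an illustration of the low-cost end of this range, the sketch below applies a theme to a site with CSOM. The site URL and theme-gallery file names are hypothetical; substitute the palette and font scheme files available in your own site collection's theme gallery.

```csharp
// Illustrative sketch: apply a theme (composed-look elements) with CSOM.
// Requires the Microsoft.SharePoint.Client assemblies; URLs are examples.
using (ClientContext ctx = new ClientContext("https://contoso.sharepoint.com/sites/dev"))
{
    Web web = ctx.Web;
    web.ApplyTheme(
        "/sites/dev/_catalogs/theme/15/palette005.spcolor",   // color palette
        "/sites/dev/_catalogs/theme/15/fontscheme003.spfont", // font scheme
        null,   // no background image
        false); // don't share the generated theme files across the site collection
    ctx.ExecuteQuery();
}
```

Because this runs entirely through the client object model, it fits the remote provisioning pattern: the same call can be made from a provider-hosted add-in or a console application without deploying anything to the SharePoint farm.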

Important Although it’s possible to create custom master pages and other structural elements as part of a custom branding project, the long-term cost of supporting structural customizations can be high, and might make it more costly for your organization to apply upgrades and support the long-term applicability of short-term investments in customization.

4.9.2 Branding SharePoint sites hosted on-premises or on a dedicated farm

You can use the remote provisioning pattern to brand Team sites, Publishing sites, and OneDrive for Business sites that are hosted on-premises or on a dedicated farm at both the site collection and subsite level.

4.9.3 SharePoint Online

Part of planning a SharePoint branding project is deciding which types of site(s) you want to build, brand, and provision. SharePoint Online licensing affects whether publishing site capabilities are available to you. While all licenses enable you to specify at least one public website that has some of the features of a SharePoint Server Publishing site, not all licenses provide full Publishing site capabilities.

Table 7. Site options in SharePoint Online

|Office 365 edition|Team site|Public website|Publishing site|Notes|
|:-----|:-----|:-----|:-----|:-----|
|Small Business|Yes|Yes|No|Includes one Team site and the public website. Does not include Publishing site functionality. The public website capabilities were designed with small business in mind.|
|Enterprise|Yes|No|Yes|Includes a Team site collection at the root web application for the domain that does not include Publishing, and you can create new Publishing site collections under that root web application. |
For more information, see Select an Office 365 plan for business and Model: Design and branding in SharePoint 2013.

4.10 When should I customize?

<a name="sectionSection9"> </a>

Most functionality you need to meet your business needs is available out-of-the-box in SharePoint. Therefore, before creating a customization, determine whether there is an actual business case for creating the customization and what the long-term costs of creating and supporting this customization would be to the enterprise. How are features and functionality provided for end users? Look at business goals and user experience considerations before technology.

When working with an existing custom SharePoint solution and weighing whether and how to migrate it to the Cloud Add-in Model, first understand why the customization was done and what purpose it serves.

When considering moving an existing customization from full-trust code to the Cloud Add-in Model, there usually isn’t a one-to-one relationship between features and functionality. Rather than trying to find a one-to-one match between server-side and client-side code, consider alternative approaches. Table 8 maps some commonly used concepts and functionality of SharePoint solutions to their equivalents in add-ins for SharePoint.

Table 8. Mapping SharePoint concepts to add-ins

|Task|In SharePoint solutions|In add-ins for SharePoint|Guidance|
|:-----|:-----|:-----|:-----|
|Display information in SharePoint pages|Web Parts|Add-in parts|Web Parts run on the SharePoint server with the user's permissions or with elevated privileges. Add-in parts run in the browser or on an external server with an add-in identity that has specifically granted permissions, and they are completely isolated on the client in their own domain. Add-in parts execute outside of SharePoint and incur no performance impact on the SharePoint server.<br/>How to: Create add-in parts to install with your add-in for SharePoint|
|Create and manage notifications|Event receivers and feature receivers|Remote event receivers and add-in event receivers|Event receivers and feature receivers require server-side code and can't notify external systems of events. Remote event receivers use client-side code, can be used in SharePoint solutions or add-ins for SharePoint, and can notify external systems of events. Add-in event receivers execute code when add-ins are installed, uninstalled, or upgraded.<br/>Handling events in add-ins for SharePoint<br/>How to: Create an event receiver for an add-in for SharePoint|
|Access data|.NET server object model (SSOM), .NET client object model (CSOM), and OData|.NET client object model (CSOM, JSOM), OData, REST, cross-domain libraries|How to: Complete basic operations using SharePoint 2013 client library code<br/>How to: Complete basic operations using JavaScript library code in SharePoint 2013<br/>Get started with the SharePoint 2013 REST service<br/>.NET client API reference for add-ins for SharePoint<br/>JavaScript API reference for add-ins for SharePoint<br/>REST API reference and samples|
|Package and deploy|Solution packages (WSPs, feature packages)|Add-in catalog and Office Store|Solution packages are difficult to deploy across a SharePoint farm. You can publish an add-in for SharePoint to the Office Store if you want to make it publicly available or sell it, or use the add-in catalog to make an add-in for SharePoint available within your organization. Guidance and code samples in the solution pack demonstrate how to use add-ins for SharePoint to provision branding elements to your SharePoint site.<br/>How to: Set up an add-in catalog on SharePoint Online<br/>How to: Set up an add-in catalog on SharePoint<br/>Publish add-ins for Office and SharePoint to make them available to users<br/>Choose patterns for developing and hosting your add-in for SharePoint|
|Use external data|External content types|App-scoped external content types|SharePoint site administrators or SharePoint Designer users must create and/or install external content types, which can be installed only at the farm level. App-scoped external content types apply only to the add-in for SharePoint for which they were created, require no administration, and can access OData sources.<br/>App-scoped external content types in SharePoint 2013<br/>How to: Create an external content type from an OData source in SharePoint 2013|
|Add custom pages and master pages|Application pages and site pages|Web-hosted pages|Application pages are shared across all sites on the server and are hosted on SharePoint. Site pages are hosted by SharePoint and require that page controls be listed in a safe controls list. While application pages are ideal for custom code, custom code on site pages will break after customization. Instead, use web-hosted pages: they are designed to be customizable, support the use of built-in Web Parts on site pages, are hosted externally, and are available anywhere the add-in is installed.|

4.11 In this section

<a name="sectionSection10"> </a>

4.12 Additional resources

<a name="bk_addresources"> </a>

5 Bulk upload documents sample add-in for SharePoint

As part of your Enterprise Content Management (ECM) strategy, you can bulk upload documents to document libraries, including OneDrive for Business.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

Note The sample uploads one file to a document library. To upload multiple files, you’ll need to extend the sample.

This add-in uses a console application to upload files by using REST API calls. Configuration settings are specified in an XML and a CSV file. Use this solution if you want to:

  • Upload files to SharePoint Online.

  • Migrate to Office 365 and use a custom migration tool to move your files.

5.1 Before you begin

<a name="sectionSection0"> </a>

To get started, download the Core.BulkDocumentUploader sample add-in from the Office 365 Developer patterns and practices project on GitHub.

Before you run the code sample, do the following:

  1. Edit the OneDriveUploader.xml file with the following information:

    • The location where you want to save your text and CSV log files.

    • The file path to your CSV mapping file (for example, C:\PnP\Samples\Core.BulkDocumentUploader\Input\SharePointSites.csv).

    • The location of the company policy files to upload (for example, C:\PnP\Samples\Core.BulkDocumentUploader\Input\OneDriveFiles).

    • Your SharePoint Online credentials.

    • The document action to perform (either upload or delete).

    • The new file name to apply to the file after the file is uploaded to the document library (for example, COMPANY POLICY DOCUMENT.xlsx).

  2. In the SharePointSites.csv mapping file, list the document library URL to upload files to, and the name of the company policy file to upload.

  3. Add the file path of the OneDriveUploader.xml file as a command-line argument. To do this, open the Core.BulkDocumentUploader project properties in Solution Explorer, and then choose Properties > Debug, as shown in Figure 1.

    Figure 1. Setting OneDriveUploader.xml as a command-line argument in the project properties

    Screenshot of the Core.BulkDocumentUploader properties pane with Debug highlighted.
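Step 2 above describes the SharePointSites.csv mapping file only loosely. The exact column layout is dictated by the sample's CsvProcessor, so treat the following as a hypothetical sketch: one row per target, pairing a document library URL with the name of the policy file to upload (the URLs and file names here are invented for illustration):

```text
https://contoso.sharepoint.com/sites/hr/Shared%20Documents,HolidayPolicy
https://contoso.sharepoint.com/sites/finance/Shared%20Documents,ExpensePolicy
```

Note that IterateCollection appends the .xlsx extension when it builds the FileInfo object, so under this assumption the file name column would be listed without an extension.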

5.2 Using the Core.BulkDocumentUploader sample app

<a name="sectionSection1"> </a>

From the Main method in Program.cs, the RecurseActions method calls the Run method in OneDriveMapper.cs. The Run method gets the location of the file to upload from SharePointSites.csv, and then calls the IterateCollection method.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

public override void Run(BaseAction parentAction, DateTime CurrentTime, LogHelper logger)
        {
            CsvProcessor csvProcessor = new CsvProcessor();

            logger.LogVerbose(string.Format("Attempting to read mapping CSV file '{0}'", this.UserMappingCSVFile));

            using (StreamReader reader = new StreamReader(this.UserMappingCSVFile))
            {
                csvProcessor.Execute(reader, (entries, y) => { IterateCollection(entries, logger); }, logger);
            }
        }

The SharePointSites.csv file lists a file to upload and the document library to upload that file to. The IterateCollection method then does the following to upload the file to the document library:

  1. Gets the file to upload.

  2. Ensures that the user has permissions to add items.

  3. Creates the HttpWebRequest object with the authentication cookie, the REST string request to upload the document, and the HTTP request action method.

  4. Performs the file upload.

Note The file name is overwritten with the value of FileUploadName specified in OneDriveUploader.xml.

public override void IterateCollection(Collection<string> entries, LogHelper logger)
        {
            Stopwatch IterationSW = new Stopwatch();
            IterationSW.Start();

            logger.LogVerbose(string.Format(CultureInfo.CurrentCulture, "Establishing context object to: '{0}'", entries[this.SiteIndex]));

            try
            {
                // Use the context of the current iteration URL for current user item.
                using (ClientContext context = new ClientContext(entries[this.SiteIndex]))
                {
                    using (SecureString password = new SecureString())
                    {
                        foreach (char c in this.Password.ToCharArray())
                        {
                            password.AppendChar(c);
                        }

                        context.Credentials = new SharePointOnlineCredentials(this.UserName, password);

                        // Get the file to upload from the directory.
                        FileInfo theFileToUpload = new FileInfo(Path.Combine(this.DirectoryLocation + "\\", entries[this.FileIndex] + ".xlsx"));

                        logger.LogVerbose(string.Format(CultureInfo.CurrentCulture, "Attempting to {0} file {1}", this.DocumentAction, theFileToUpload));

                        // Ensure that the account has permissions to access.
                        BasePermissions perm = new BasePermissions();
                        perm.Set(PermissionKind.AddListItems);

                        ConditionalScope scope = new ConditionalScope(context, () => context.Web.DoesUserHavePermissions(perm).Value);

                        using(scope.StartScope())
                        {
                            Stopwatch tempSW = new Stopwatch();
                            tempSW.Start();

                            int success = 0;

                            while(tempSW.Elapsed.TotalSeconds < 20)
                            {
                                var digest = context.GetFormDigestDirect();

                                string cookie = ((SharePointOnlineCredentials)context.Credentials).GetAuthenticationCookie(new Uri(entries[this.SiteIndex])).TrimStart("SPOIDCRL=".ToCharArray());

                                using (Stream s = theFileToUpload.OpenRead())
                                {
                                    // Define REST string request to upload document to context. This string specifies the Documents folder, but you can specify another document library.
                                    string theTargetUri = string.Format(CultureInfo.CurrentCulture, "{0}/_api/web/lists/getByTitle('Documents')/RootFolder/Files/add(url='{1}',overwrite='true')?", entries[this.SiteIndex], this.FileUploadName);

                                    // Define REST HTTP request object.
                                    HttpWebRequest SPORequest = (HttpWebRequest)HttpWebRequest.Create(theTargetUri);

                                    // Define HTTP request action method.
                                    if (this.DocumentAction == "Upload")
                                    {
                                        SPORequest.Method = "POST";
                                    }
                                    else if (this.DocumentAction == "Delete")
                                    {
                                        SPORequest.Method = "DELETE";
                                    }
                                    else
                                    {
                                        logger.LogVerbose(string.Format(CultureInfo.CurrentCulture, "There was a problem with the HTTP request in DocumentAction attribute of XML file"));
                                        throw new Exception("The HTTP Request operation is not supported, please check the value of DocumentAction in the XML file");
                                    }

                                    // Build out additional HTTP request details.
                                    SPORequest.Accept = "application/json;odata=verbose";
                                    SPORequest.Headers.Add("X-RequestDigest", digest.DigestValue);
                                    SPORequest.ContentLength = s.Length;
                                    SPORequest.ContentType = "application/octet-stream";

                                    // Handle authentication to context through cookie.
                                    SPORequest.CookieContainer = new CookieContainer();
                                    SPORequest.CookieContainer.Add(new Cookie("SPOIDCRL", cookie, string.Empty, new Uri(entries[this.SiteIndex]).Authority));

                                    // Perform file upload/deletion.
                                    using (Stream requestStream = SPORequest.GetRequestStream())
                                    {
                                        s.CopyTo(requestStream);
                                    }

                                    // Get HTTP response to determine success of operation.
                                    HttpWebResponse SPOResponse = (HttpWebResponse)SPORequest.GetResponse();

                                    logger.LogVerbose(string.Format(CultureInfo.CurrentCulture, "Successfully '{0}' file {1}", this.DocumentAction, theFileToUpload));
                                    logger.LogOutcome(entries[this.SiteIndex], "SUCCESS");

                                    success = 1;

                                    // Dispose of the HTTP response.
                                    SPOResponse.Close();

                                    break;
                                }
                                                       
                            }

                            tempSW.Stop();

                            if (success != 1)
                            {
                                throw new Exception("The HTTP Request operation exceeded the timeout of 20 seconds");
                            }

                        }
                    }
                }

            }
            catch(Exception ex)
            {
                logger.LogVerbose(string.Format(CultureInfo.CurrentCulture, "There was an issue performing '{0}' on to the URL '{1}' with exception: {2}", this.DocumentAction, entries[this.SiteIndex], ex.Message));
                logger.LogOutcome(entries[this.SiteIndex], "FAILURE");
            }
            finally
            {
                IterationSW.Stop();
                logger.LogVerbose(string.Format(CultureInfo.CurrentCulture, "Completed processing URL:'{0}' in {1} seconds", entries[this.SiteIndex], IterationSW.ElapsedMilliseconds/1000));
            }
        }


6 Introducing the API for bulk custom user profile properties update for SharePoint Online

Applies to: SharePoint Online

As part of the new Client Side Object Model (CSOM) version (4622.1208 or newer), SharePoint has a new capability for bulk importing custom user profile properties. Previously you could use the user profile CSOM operations to update specific properties for individual user profiles, but that approach is not performant, and with thousands of profiles the operation is too time consuming.

Because many enterprises have business requirements to replicate custom attributes to the SharePoint user profile service, a more performant bulk user profile API has been released.

6.1 Bulk user profile update flow

<a name="sectionSection0"> </a>

Bulk UPA update flow

  1. User attributes are synchronized from the corporate Active Directory to Azure Active Directory. You can select which attributes are replicated between on-premises and Azure.
  2. A standardized set of attributes is replicated from Azure Active Directory to the SharePoint user profile store in Office 365. This cannot be controlled as it can on-premises.
  3. A custom synchronization tool takes advantage of the new bulk update APIs. The tool uploads a JSON formatted file to the Office 365 tenant and queues the import process. It can be implemented as managed code (.NET) or as a PowerShell script using the new CSOM APIs.
  4. An LOB system, or any external system, is the actual source of the information in the JSON formatted file. The data could also be a combination of data from Active Directory and from an external system. Notice that from an API perspective, the LOB system could also be an on-premises SharePoint 2013 or 2016 deployment from which you synchronize user profile attributes to SharePoint Online.
  5. An out-of-the-box server-side timer job running in SharePoint Online checks for queued import requests and performs the actual import operation based on the API calls and the information in the provided file.
  6. The extended user profile information is available in the user profile and can be used for any out-of-the-box or custom functionality in SharePoint Online.

Note: Import only works for user profile properties that have not been set as editable by end users. This is to avoid situations where the import process would override information that an end user has already updated. Import also only allows custom properties that are not Active Directory core properties, which typically must be synchronized to Azure Active Directory. For a list of typical core directory properties, see the table in the questions and answers section later in this document.

Here’s a quick demo video on using the new CSOM API from managed code or from PowerShell. You can find the code sample used, including the sample PowerShell script, in the Office Dev PnP Code Gallery.

Video: https://www.youtube.com/watch?v=-X_2T0SRUBk

6.2 Import file format

<a name="sectionSection1"> </a>

The information to be processed is provided in a JSON formatted file. Here’s the structure of the file format.

{
   "value": [
     {
       "<IdName>": "<UserIdValue_1>",
       "<AttributeName_1>": "<User1_AttributedValue_1>",
       "<AttributeName_2>": "<User1_AttributedValue_2>"
     },
     {
       "<IdName>": "<UserIdValue_2>",
       "<AttributeName_1>": "<User2_AttributedValue_1>",
       "<AttributeName_2>": "<User2_AttributedValue_2>"
     },
     {
       "<IdName>": "<UserIdValue_n>",
       "<AttributeName_1>": "<Usern_AttributedValue_1>",
       "<AttributeName_2>": "<Usern_AttributedValue_2>"
     }
   ]
}

Here’s a simple example file. Identity resolution in this case is based on the IdName property, and there are two properties to update, Property1 and Property2, which are mapped to custom user profile properties (City and OfficeCode in the samples later in this document). The file contains information for four different accounts in the tenant. The property names used in the source file do not have to match the names used in the SharePoint Online User Profile Service, because the correct property mapping is provided in the code.

{
  "value": [
    {
      "IdName": "vesaj@contoso.com",
      "Property1": "Helsinki",
      "Property2": "Viper"
    },
    {
      "IdName": "bjansen@contoso.com",
      "Property1": "Brussels",
      "Property2": "Beetle"
    },
    {
      "IdName": "unknowperson@contoso.com",
      "Property1": "None",
      "Property2": ""
    },
    {
      "IdName": "erwin@contoso.com",
      "Property1": "Stockholm",
      "Property2": "Elite"
    }
  ]
}
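An import file like the one above is typically generated from the LOB source rather than written by hand. Here’s a minimal sketch of that generation step, using Python purely for illustration (the record list and property names follow the example file; they are not part of the API):

```python
import json

def build_import_file(records, id_name="IdName"):
    """Build the bulk-import JSON structure: a top-level 'value'
    array with one object per user account."""
    value = []
    for user_id, props in records:
        entry = {id_name: user_id}   # identity property first
        entry.update(props)          # e.g. Property1, Property2
        value.append(entry)
    return {"value": value}

records = [
    ("vesaj@contoso.com", {"Property1": "Helsinki", "Property2": "Viper"}),
    ("bjansen@contoso.com", {"Property1": "Brussels", "Property2": "Beetle"}),
]
print(json.dumps(build_import_file(records), indent=2))
```

The resulting structure matches the file format shown above, and the serialized file can then be uploaded to a document library in the tenant before the import job is queued.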

6.2.1 Source data restrictions

There are a few restrictions on the source data:

  • Max size: 2GB
  • Max properties: 500,000
  • Source file must be uploaded to same SharePoint Online tenant where the process is started
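Because validation failures otherwise surface only when the queued job actually runs, it can be worth checking these limits client-side before uploading the file. Here is a hedged sketch (Python for illustration; counting each non-identity attribute across all entries as one "property" is an assumption about how the limit is measured):

```python
import json

MAX_BYTES = 2 * 1024**3     # 2 GB file size limit
MAX_PROPERTIES = 500_000    # maximum properties per import job

def validate_import_file(raw_bytes, id_name="IdName"):
    """Return a list of problems found; an empty list means the
    file passes these client-side checks."""
    problems = []
    if len(raw_bytes) > MAX_BYTES:
        problems.append("file exceeds 2 GB")
    try:
        data = json.loads(raw_bytes)
    except ValueError:
        return problems + ["file is not valid JSON"]
    entries = data.get("value", [])
    # Count every mapped attribute, excluding the identity property.
    prop_count = sum(len(e) - (id_name in e) for e in entries)
    if prop_count > MAX_PROPERTIES:
        problems.append("more than 500,000 properties")
    if any(id_name not in e for e in entries):
        problems.append("entries missing the identity property")
    return problems
```

The third restriction (same tenant) can only be enforced at upload time, which is why it is not covered by this check.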

6.3 User profile property import process

<a name="sectionSection2"> </a>

Here’s the full process:

  1. Create or synchronize users to the Office 365 tenant or to the Azure AD associated with it
    • When users are synchronized to Azure AD, a standardized set of attributes is also synchronized to the SharePoint Online User Profile Service.
  2. Create the needed custom properties in the User Profile Service
    • Since there are no remote APIs for creating custom properties in the User Profile Service, this step has to be performed manually, once for each tenant where custom user profile properties are needed.
    • Notice that only user profile properties that are not editable by end users can be imported. Trying to import a JSON object property to a user profile property that is marked as editable by end users will result in an exception when the CSOM API is called.
  3. Create and upload the external data file to the Office 365 tenant
    • You’ll need to upload the JSON formatted data file containing the information to be updated to the particular Office 365 tenant.
    • Notice that if any exception occurs during the actual import process, SharePoint saves additional logging information in a new sub folder of the same document library where the file was located.
    • Cleaning up the log files and the provided JSON files is the responsibility of the custom solution using the API. You should consider the life cycle of these files in your implementation. The files are stored in document libraries, so they consume storage assigned to the site collection.
  4. Call the bulk UPA import API to queue the import job
    • Use the CSOM API to queue the import process. This can be achieved by executing CSOM code from managed code or from PowerShell.
    • The method call that queues the job requires the property mapping information and the location of the data file. The method execution is fast; it simply queues the actual import process, which is executed as a back-end process in SharePoint Online.
  5. Check the status of the import job
    • You can also use remote APIs to check the status of a specific import job or of all import jobs done recently. To check the status of a specific job, store the unique job identifier that is returned when the job is queued.

6.4 CSOM API for bulk import process

<a name="sectionSection3"> </a>

6.4.1 Queue import

You can queue the import process by calling the QueueImportProfileProperties method on the Office365Tenant object. The call is asynchronous in the sense that it doesn’t download the source data or perform the import; it simply adds a work item to the queue to do this later. Here’s the full signature of the method:

public ClientResult<Guid> QueueImportProfileProperties(
                          ImportProfilePropertiesUserIdType idType, 
                          string sourceDataIdProperty, 
                          IDictionary<string, string> propertyMap, 
                          string sourceUri);

6.4.1.1 Parameters

idType
Type: ImportProfilePropertiesUserIdType

The type of identifier to use when looking up the user profile. Possible values are Email, CloudId, and PrincipalName, indicating how the identity is resolved in the cloud: by email address, by Azure AD object id, or by principal name. Note that regardless of the type, the user must already exist in the User Profile Service for the import to work. It’s recommended to use CloudId as the identifying property to ensure uniqueness.

Property mapping between ID type and Azure AD property:

UPA Bulk Import ID Type  Azure Directory Attribute
CloudId  ObjectId
PrincipalName  userPrincipalName
Email  mail

sourceDataIdProperty
Type: System.String

The name of the id property in the source data. The value of the property from the source data will be used to look up the user. The User Profile Service property used for the lookup depends on the value of idType.

propertyMap
Type: IDictionary<string, string>

A map from source property name to User Profile Service property name. Note that the User Profile Service properties must already exist.

sourceUri
Type: System.String

The URI of the source data file to import. The file must not be transient, because it may not be downloaded until some time after the job is queued.

6.4.1.2 Return value

A Guid identifying the import job that has been queued.
Here’s sample code to start the process with the sample input file.

// Create an instance of the Office 365 Tenant object. Loading it is technically not needed.
Office365Tenant tenant = new Office365Tenant(ctx);
ctx.Load(tenant);
ctx.ExecuteQuery();

// Type of user identifier ["Email", "CloudId", "PrincipalName"] in the 
// User Profile Service. In this case we use email as the identifier at the UPA storage
ImportProfilePropertiesUserIdType userIdType = 
      ImportProfilePropertiesUserIdType.Email;

// Name of user identifier property in the JSON file
var userLookupKey = "IdName";

var propertyMap = new System.Collections.Generic.Dictionary<string, string>();

// First one is the property in the JSON file, 
// second is the user profile property name in the User Profile Service.
// Notice that we have 2 custom properties in UPA called 'City' and 'OfficeCode'
propertyMap.Add("Property1", "City");
propertyMap.Add("Property2", "OfficeCode");

// Returns a GUID, which can be used to check the status of the execution and the end results.
// fileUrl points to the JSON data file already uploaded to the tenant.
var workItemId = tenant.QueueImportProfileProperties(
      userIdType, userLookupKey, propertyMap, fileUrl
      );

ctx.ExecuteQuery();

6.4.2 Check status of import job

You can also check the status of User Profile Service import jobs by using the new CSOM APIs. There are two new methods for this in the Tenant object.
You can check the status of an individual import job by using the GetImportProfilePropertyJob method on the Office365Tenant object. You will need to provide the unique identifier of the specific import job as a parameter for this method. Here’s the full signature of the method:

public ImportProfilePropertiesJobInfo GetImportProfilePropertyJob(Guid jobId);

6.4.2.1 Parameters

jobId
Type: Guid

The id of the job for which to get high-level status.

6.4.2.2 Return value

An ImportProfilePropertiesJobInfo object with high-level status information about the specified job.

Here’s sample code to get the status of a specific import job by using the stored identifier.

// Check status of specific request based on job id received when we queued the job
Office365Tenant tenant = new Office365Tenant(ctx);
var job = tenant.GetImportProfilePropertyJob(workItemId);
ctx.Load(job);
ctx.ExecuteQuery();

You can check the status of all import jobs by using the GetImportProfilePropertyJobs method on the Office365Tenant object. Here’s the full signature of the method:

public ImportProfilePropertiesJobStatusCollection GetImportProfilePropertyJobs(); 

Return value
An ImportProfilePropertiesJobStatusCollection object, which is a collection of ImportProfilePropertiesJobStatus objects with high-level status information about the jobs.

Here’s sample code to get the status of all import jobs currently saved for the tenant. These can be already-processed or queued jobs.

// Load all import jobs – old and queued ones
Office365Tenant tenant = new Office365Tenant(ctx);
var jobs = tenant.GetImportProfilePropertyJobs();
ctx.Load(jobs);
ctx.ExecuteQuery();
foreach (var item in jobs)
{
   // Check whatever properties needed
   var state = item.State;
}

The ImportProfilePropertiesJobStatus object returned with the import status information has the following properties.

JobId - Guid
The Id of the import job

State - ImportProfilePropertiesJobState
An enum that has the following values:

  • Unknown - The state of the job cannot be determined
  • Submitted - The job has been submitted to the system
  • Processing - The job is being processed
  • Queued - The job has passed validation and is queued for import to the UPA
  • Succeeded - The job completed with no errors
  • Error - The job completed with errors

SourceUri - Uri
The URI to the data source file

Error - ImportProfilePropertiesJobError
An enum representing the possible error:

  • NoError - No error found
  • InternalError - The error was caused by a failure in the service
  • DataFileNotExist - The data source file cannot be found
  • DataFileNotInTenant - The data source file does not belong to the same tenant
  • DataFileTooBig - The size of the data file is too big
  • InvalidDataFile - The data source file does not pass validation; there might be more details in the log file
  • ImportCompleteWithErrors - The data has been imported, but some errors were encountered

ErrorMessage - String
The error message

LogFileUri - Uri
The Uri to the folder where the logs have been written

6.5 Calling Import API from PowerShell

<a name="sectionSection4"> </a>

You can take advantage of the User Profile Service bulk import API from PowerShell. This means using the CSOM code directly in a PowerShell script with the needed parameters. It requires that the updated CSOM redistributable package is installed on the computer where the script is executed.
By using PowerShell, you do not need to compile your code in Visual Studio, which can be a more suitable model for some customers, depending on the exact business scenario.

6.5.1 Sample script

Here’s a sample PowerShell script that performs the same operation as the managed code earlier in this document.

# Get needed information from end user
$adminUrl = Read-Host -Prompt 'Enter the admin URL of your tenant'
$userName = Read-Host -Prompt 'Enter your user name'
$pwd = Read-Host -Prompt 'Enter your password' -AsSecureString
$importFileUrl = Read-Host -Prompt 'Enter the URL to the file located in your tenant'

# Get instances to the Office 365 tenant using CSOM
$uri = New-Object System.Uri -ArgumentList $adminUrl
$context = New-Object Microsoft.SharePoint.Client.ClientContext($uri)

$context.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($userName, $pwd)
$o365 = New-Object Microsoft.Online.SharePoint.TenantManagement.Office365Tenant($context)
$context.Load($o365)

# Type of user identifier ["Email", "CloudId", "PrincipalName"] in the User Profile Service
$userIdType=[Microsoft.Online.SharePoint.TenantManagement.ImportProfilePropertiesUserIdType]::Email

# Name of user identifier property in the JSON
$userLookupKey="IdName"

# Create property mapping between on-premises name and O365 property name
# Notice that we have here 2 custom properties in UPA called 'City' and 'OfficeCode'
$propertyMap = New-Object -type 'System.Collections.Generic.Dictionary[String,String]'
$propertyMap.Add("Property1", "City")
$propertyMap.Add("Property2", "OfficeCode")

# Call to queue UPA property import 
$workItemId = $o365.QueueImportProfileProperties($userIdType, $userLookupKey, $propertyMap, $importFileUrl);

# Execute the CSOM command for queuing the import job
$context.ExecuteQuery();

# Output unique identifier of the job
Write-Host "Import job created with following identifier:" $workItemId.Value 

6.6 Exception process

<a name="sectionSection5"> </a>
There are two levels of checking of the provided information when this API is used. When you queue the import process with CSOM, an initial check is run on the provided values, such as confirming that the mapped properties exist in the User Profile Service and that they are not editable by the end user. When the queue API is called, only this initial verification is applied; final verification of the provided information is performed when the import job is actually executed.

If there are any exceptions during the actual import job execution, an additional log file with the needed details is generated in the same document library where the import file was located. Log files for a specific import job are saved in a sub folder named with the unique identifier of that import job.

Here’s an example result from one import in the document library where the import file was located. The picture below shows two sub folders for two different executions, created in the document library where the import file is stored.

Exception process

The actual log file is saved in the sub folder, and you can download it from Office 365 for detailed analysis.

Exception process

6.6.1 Common exceptions and description

The following table contains typical exceptions that you could encounter when you start using the User Profile Service bulk API.

Exception: Property Names [AboutMe] are editable by user.
This is thrown by the CSOM API when you call the ExecuteQuery method to submit the job to your tenant. The API checks that all properties included in the property mapping are NOT user editable. The exception points out the property that cannot be used. In this example we tried to map a JSON property to the AboutMe property in the User Profile Service, but this is not allowed, because AboutMe is a user-editable property.

Exception: InvalidProperty - Property ‘AboutMe’ is not mapped to any property in the user profile application.
The JSON data file contained a property that has not been mapped to a User Profile Service property in SharePoint Online. This means that the source data file contains properties for which you have not assigned a proper mapping. You need to provide a mapping definition to the QueueImportProfileProperties method for each property in the JSON data object.

Exception: MissingIdentity - The identity is missing for the user object.
The identity property could not be found on the user object. The most likely cause is that the sourceDataIdProperty attribute is set incorrectly for the QueueImportProfileProperties method. Ensure that you have the right property in the JSON source file and that your code/script assigns this attribute correctly based on the data file content.

Exception: IdentityNotResolvable - User identity cannot be resolved.
The data file contained an identity that could not be resolved or was not present in the User Profile Service. In this case the user profile with the given email address could not be located in the User Profile Service.

Exception: DataFileNotJson - JsonToken EndObject is not valid for closing JsonType Array. Path ‘value’, line 8, position 10.
Your import file format is not valid JSON and does not match the expected format.

6.7 Questions and answers

<a name=“sectionSection6”> </a>

Can I execute the code using app-only/add-in only permissions?
Yes, this is absolutely possible. You’ll need to register a client id and secret to be able to execute the APIs. Since the actual import of the file does not occur synchronously under the identity of the caller, this works without any issues.

This API is updating properties in the User Profile Service, but how would I create those properties in the tenant?
There’s no remote API to create custom user profile properties programmatically, so this is a manual operation that needs to be completed once per tenant. You can refer to this article for instructions on how to create these custom properties.

Is this capability available in the on-premises SharePoint?
This capability is currently available only in SharePoint Online. It would also be useful on-premises, but it is less needed there, since you can modify attribute mappings in the on-premises User Profile Service Application. You could also import user profile attributes by using Business Connectivity Services (BCS) in SharePoint 2013, but this option is not available in SharePoint 2016, which means that for SharePoint 2016 your only option currently is to implement customizations that take advantage of the user profile web services.

Could I use this API for synchronizing user profile property values from my on-premises SharePoint 2013 or 2016 to SharePoint Online?
Yes. On-premises SharePoint would be considered as any source system. You’d have to export the user profile values from your on-premises SharePoint to JSON file format and process would be exactly the same as importing values from any other system.

Can I import string based multi-value properties?
No. This is not currently supported with this API.

What permissions are required for executing this API?
You will need to have Global Admin permissions currently. SharePoint Admin is not sufficient.

Can I import taxonomy based properties?
No. This is not currently supported with this API.

What if I define a mapping in the code that is not used, or have a property in the JSON which is not mapped?
If your code or script defines a mapping that is not used, or the data file does not contain properties for that mapping, execution continues without any exceptions and the import is applied based on the mapped properties. If, however, the JSON file contains a property that is not mapped, the import process is aborted and exception details are provided in the log file for that specific job execution.
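
As a sketch of how such a mapping is defined, the following hedged example queues a bulk import job with CSOM. The admin URL, file location, and property names are illustrative assumptions, and the QueueImportProfileProperties call requires the SharePoint Online tenant administration client components; authentication setup is omitted:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Online.SharePoint.TenantManagement;
using Microsoft.SharePoint.Client;

class BulkImportSketch
{
    static void Main()
    {
        // Illustrative tenant admin site URL; credentials setup not shown.
        using (var context = new ClientContext("https://contoso-admin.sharepoint.com"))
        {
            // Map JSON property names (left) to user profile property names (right).
            // A mapping with no matching JSON data is skipped; an unmapped JSON
            // property aborts the whole job.
            var propertyMap = new Dictionary<string, string>
            {
                { "Property1", "CostCenter" } // hypothetical names
            };

            var tenant = new Office365Tenant(context);
            var jobId = tenant.QueueImportProfileProperties(
                ImportProfilePropertiesUserIdType.Email, // how users are identified
                "IdName",                                // JSON property holding the identifier
                propertyMap,
                "https://contoso.sharepoint.com/Shared Documents/profiles.json");
            context.ExecuteQuery();

            Console.WriteLine("Queued import job: " + jobId.Value);
        }
    }
}
```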

What if I have a need to update custom properties that are beyond the size limitations of this bulk API (i.e. >2 GB file or >500,000 properties)?
You would need to batch your work by triggering multiple jobs in sequence, finishing one job at a time within the limits of this API. You should expect these high-volume imports to take a long time to complete. You should also optimize the import jobs for delta changes in the custom profile properties, rather than importing the full set of values in every job.

Which Azure Active Directory attributes are synchronized to the SharePoint Online user profile by default?
See the following table for the official list of synchronized attributes and their mapping between Azure Active Directory and the SharePoint Online user profile.

Azure Active Directory Attribute  SharePoint Online Profile Property
ObjectSid SPS-SavedSID
msonline-UserPrincipalName UserName
msonline-UserPrincipalName AccountName
msonline-UserPrincipalName SPS-ClaimID
msonline-UserPrincipalName SPS-UserPrincipalName
GivenName FirstName
sn LastName
Manager Manager
DisplayName PreferredName
telephoneNumber WorkPhone
proxyAddresses WorkEmail
proxyAddresses SPS-SIPAddress
PhysicalDeliveryOfficeName Office
Title Title
Title SPS-JobTitle
Department Department
Department SPS-Department
ObjectGuid ADGuid
WWWHomePage PublicSiteRedirect
DistinguishedName SPS-DistinguishedName
msOnline-ObjectId msOnline-ObjectId
PreferredLanguage SPS-MUILanguages
msExchHideFromAddressList SPS-HideFromAddressLists
msExchRecipientTypeDetails SPS-RecipientTypeDetails
msonline-groupType IsUnifiedGroup
msOnline-IsPublic IsPublic
msOnline-UserType SPS-UserType
GroupType GroupType
SPO-IsSharePointOnlineObject SPO-IsSPO

7 Call web services from SharePoint workflows

Deploy a SharePoint 2013 workflow to the host web from an add-in for SharePoint, and call web services from SharePoint workflows.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

You can use the SharePoint 2013 add-in model to create and deploy workflows that run on either the add-in web or the host web. These workflows can interact with the remotely hosted portions of provider-hosted add-ins. The workflows can also call remote web services that contain important business data in one of two ways:

  • By passing query information to the remotely hosted portion of the add-in. The remote web application then calls the web service and passes the information back to SharePoint.

  • By querying the web service by using the SharePoint 2013 web proxy. The workflow passes the results of the query to the remotely hosted portion of the add-in, which then passes the information to SharePoint.

The information retrieved from the web service can be stored in SharePoint lists.

This article describes three code samples that show you how to call web services from workflows, as listed in the following table. In the first two samples, the workflows and the lists are deployed to the add-in web when the add-in installs. The last sample provides the basic shell of a workflow and instructions for how to deploy it to the host web and associate it with a list on the host web.

Workflow tasks and associated samples

Task Sample
Call custom web services from a workflow Workflow.CallCustomService
Call a custom web service from a workflow and update SharePoint by using the SharePoint web proxy Workflow.CallServiceUpdateSPViaProxy
Associate a workflow with the host web Workflow.AssociateToHostWeb

7.1 Call custom web services from a workflow

<a name="bk1"> </a>

The Workflow.CallCustomService sample shows you how to create a workflow that calls a custom web service that updates SharePoint list data. It also shows you how to design a provider-hosted add-in so that it queries a web service by using the remotely hosted web application that deploys with the add-in. This sample is useful when you want all the interactions with the web service to be handled by the remotely hosted portion of your provider-hosted add-in.

The sample works by starting a workflow from a remote web application. This workflow passes query information submitted by the user to the remote web application, which then uses that information to construct a query to the Northwind OData web service. The query returns the product suppliers for a given country. After it receives that information, the remote web application updates a product suppliers list that the add-in has deployed to the add-in web.

Note The Workflow.CallCustomService sample page contains instructions for deploying this add-in. You can also deploy and test with F5 debugging in Visual Studio if you follow the instructions in the blog post Debugging SharePoint 2013 workflows using Visual Studio 2013.

This app’s start page includes a drop-down menu from which you can select a country for which you want to create a product suppliers list (Figure 1).

Figure 1. Workflow.CallCustomService sample add-in start page

Screenshot that shows the Start page of the sample app

The Create button on the screen calls a Create method in the Controllers\PartSuppliersController.cs file that creates a new entry in the Part Suppliers list on the add-in web. The Create method then calls the Add method that is defined in the Services\PartSuppliersService.cs file. The sequence is shown in the following two code examples.

Create method

public ActionResult Create(string country, string spHostUrl)
        {
            var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
            using (var clientContext = spContext.CreateUserClientContextForSPAppWeb())
            {
                var service = new PartSuppliersService(clientContext);
                var id = service.GetIdByCountry(country);
                if (id == null)
                {
                    id = service.Add(country);
                    TempData["Message"] = "Part Supplier Successfully Created!";
                }
                else
                    TempData["ErrorMessage"] = string.Format("Failed to Create The Part Supplier: There's already a Part Supplier whose country is {0}.", country);

                return RedirectToAction("Details", new { id = id.Value, SPHostUrl = spHostUrl });
            }
        }

Add method

public int Add(string country)
        {
            var item = list.AddItem(new ListItemCreationInformation());
            item["Country"] = country;
            item.Update();
            clientContext.ExecuteQuery();
            return item.Id;
        }

After creating that new list item, the add-in presents a button that starts the approval workflow, as shown in Figure 2.

Figure 2. Start Workflow button in the sample app

Screenshot that shows the Start Workflow page in the sample app

Choosing the Start Workflow button triggers the StartWorkflow method that is defined in the Controllers\PartSuppliersController.cs file. This method packages the add-in web URL, the web service URL (for your remotely hosted web application, not for the Northwind web service), and the context token, and passes them to the StartWorkflow method that is defined in the Services\PartSuppliersService.cs file. The PartSuppliersService class needs the context token to interact with SharePoint.

public ActionResult StartWorkflow(int id, Guid workflowSubscriptionId, string spHostUrl)
        {
            var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext) as SharePointAcsContext;

            var webServiceUrl = Url.RouteUrl("DefaultApi", new { httproute = "", controller = "Data" }, Request.Url.Scheme);
            var payload = new Dictionary<string, object>
                {
                    { "appWebUrl", spContext.SPAppWebUrl.ToString() },
                    { "webServiceUrl", webServiceUrl },
                    { "contextToken",  spContext.ContextToken }
                };

            using (var clientContext = spContext.CreateUserClientContextForSPAppWeb())
            {
                var service = new PartSuppliersService(clientContext);
                service.StartWorkflow(workflowSubscriptionId, id, payload);
            }

            TempData["Message"] = "Workflow Successfully Started!";
            return RedirectToAction("Details", new { id = id, SPHostUrl = spHostUrl });
        }

The StartWorkflow method then creates a workflow instance and passes the three values (appWebUrl, webServiceUrl, contextToken) stored in the payload variable to the workflow.

public void StartWorkflow(Guid subscriptionId, int itemId, Dictionary<string, object> payload)
        {
            var workflowServicesManager = new WorkflowServicesManager(clientContext, clientContext.Web);

            var subscriptionService = workflowServicesManager.GetWorkflowSubscriptionService();
            var subscription = subscriptionService.GetSubscription(subscriptionId);

            var instanceService = workflowServicesManager.GetWorkflowInstanceService();
            instanceService.StartWorkflowOnListItem(subscription, itemId, payload);
            clientContext.ExecuteQuery();
        }

After the workflow starts, it makes a POST HTTP request to the remotely hosted web application. This request tells the web application to update the suppliers list with the suppliers for the country that the user has just added. The Controllers\DataController.cs file contains a POST method that receives the contents of this request.

public void Post([FromBody]string country)
        {
            var supplierNames = GetSupplierNames(country);
            UpdateSuppliers(country, supplierNames);
        }

The GetSupplierNames method (in the Controllers\DataController.cs file) constructs and executes a LINQ query to the Northwind OData web service for all the suppliers associated with the selected country. The UpdateSuppliers method then updates the Suppliers field of the newly added list item, as shown in the following two code examples.

Query Northwind

private string[] GetSupplierNames(string country)
        {
            Uri uri = new Uri("http://services.odata.org/V3/Northwind/Northwind.svc");
            var entities = new NorthwindEntities(uri);
            var names = entities.Suppliers
                .Where(s => s.Country == country)
                .AsEnumerable()
                .Select(s => s.CompanyName)
                .ToArray();
            return names;
        }

Update suppliers list

private void UpdateSuppliers(string country, string[] supplierNames)
        {
            var request = HttpContext.Current.Request;
            var authority = request.Url.Authority;
            var spAppWebUrl = request.Headers["SPAppWebUrl"];
            var contextToken = request.Headers["SPContextToken"];

            using (var clientContext = TokenHelper.GetClientContextWithContextToken(
                spAppWebUrl, contextToken, authority))
            {
                var service = new PartSuppliersService(clientContext);
                service.UpdateSuppliers(country, supplierNames);
            }
        }

If you look at the design view of the workflow.xaml file in the Approve Suppliers directory of the add-in project, you’ll see (by choosing the Arguments tab at the bottom left of the design view) that the workflow stores the three values in the payload variable that is passed to it as workflow arguments (Figure 3).

Figure 3. Payload arguments passed to the workflow

Screenshot that shows the screen for entering the payload arguments passed to the workflow

The HttpSend activity occurs before workflow approval. This activity sends the POST query to your remote web application that triggers the call to the Northwind web service and then the list item update (with the suppliers list). This activity is configured to send the request to the webServiceUrl value that was passed as a workflow argument (Figure 4).

Figure 4. HttpSend activity Uri value

Screenshot that shows text box for entering the HTTP Send web service URL

The POST request also passes the country value that is stored in the list item on which the workflow is operating (Figure 5).

Figure 5. Property grid for the HttpSend activity

Screenshot that shows the properties grid for the HTTP Send activity

The workflow sends the appWebUrl and contextToken values to the web application through the request headers (Figure 6). The headers also set the content types for sending and accepting requests.

Figure 6. Request headers for the HttpSend activity

Screenshot that shows the grid for adding HTTP Send activity request headers

If the workflow is approved, it changes the value of the isApproved field of the list item to true.

7.2 Call a custom web service from a workflow and update SharePoint by using the SharePoint web proxy

<a name="bk2"> </a>

The Workflow.CallServiceUpdateSPViaProxy sample shows how to design a provider-hosted add-in to query a web service and then pass that information to a SharePoint list via the SharePoint 2013 web proxy.

The sample shows a task that is useful when you want to encapsulate all the interactions with a web service so that they are handled directly by the workflow. Using the web proxy makes it easier to update the remote web application logic without having to update the workflow instance. If you’re not using the proxy and you have to update the logic in your web application, you’ll have to remove the existing workflow instances and then redeploy the add-in. For this reason, we recommend this design when you need to call a remote web service.

Note The Workflow.CallServiceUpdateSPViaProxy sample page contains instructions for deploying this add-in. You can also deploy and test the add-in by using F5 debugging in Visual Studio if you follow the instructions in the blog post Debugging SharePoint 2013 workflows using Visual Studio 2013.

The sample starts a workflow from a remote web application. This workflow passes query information submitted by the user to the Northwind OData web service. The query returns the product suppliers for a given country. After it receives the web service response, the workflow passes the information from the response to the remote web application. The remote web application then updates a product suppliers list that the add-in has deployed to the add-in web.

When you start the app, the start page includes a drop-down menu from which you can select a country for which you want to create a product suppliers list (Figure 7).

Figure 7. Workflow.CallServiceUpdateSPViaProxy sample add-in start page

Screenshot that shows the start page for the sample add-in with update for proxy workflow app

That button calls a method in the Controllers\PartSuppliersController.cs file that creates a new entry in the Part Suppliers list on the add-in web. The Create method in that file calls the Add method that is defined in the Services\PartSuppliersService.cs file. Both are shown in the following two code examples.

Create method

public ActionResult Create(string country, string spHostUrl)
        {
            var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
            using (var clientContext = spContext.CreateUserClientContextForSPAppWeb())
            {
                var service = new PartSuppliersService(clientContext);
                var id = service.GetIdByCountry(country);
                if (id == null)
                {
                    id = service.Add(country);
                    TempData["Message"] = "Part Supplier Successfully Created!";
                }
                else
                    TempData["ErrorMessage"] = string.Format("Failed to Create The Part Supplier: There's already a Part Supplier whose country is {0}.", country);

                return RedirectToAction("Details", new { id = id.Value, SPHostUrl = spHostUrl });
            }
        }

Add method

public int Add(string country)
        {
            var item = list.AddItem(new ListItemCreationInformation());
            item["Country"] = country;
            item.Update();
            clientContext.ExecuteQuery();
            return item.Id;
        }

After it creates that new list item, the add-in presents a button that starts the approval workflow (Figure 8).

Figure 8. Start Workflow button

Screenshot that shows the Start Workflow page in custom web service

Choosing the Start Workflow button triggers the StartWorkflow method in the Controllers\PartSuppliersController.cs file. This method packages the add-in web URL and the web service URL (for your remotely hosted web application, not for the Northwind web service) and passes them to the StartWorkflow method in the Services\PartSuppliersService.cs file. The workflow is going to communicate with the remote web application via the web proxy, and the web proxy will add the access token in a request header. This is why the workflow doesn’t pass a context token to the StartWorkflow method in this sample. The code is shown in the following example.

public ActionResult StartWorkflow(int id, Guid workflowSubscriptionId, string spHostUrl)
        {
            var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);

            var webServiceUrl = Url.RouteUrl("DefaultApi", new { httproute = "", controller = "Data" }, Request.Url.Scheme);
            var payload = new Dictionary<string, object>
                {
                    { "appWebUrl", spContext.SPAppWebUrl.ToString() },
                    { "webServiceUrl", webServiceUrl }
                };

            using (var clientContext = spContext.CreateUserClientContextForSPAppWeb())
            {
                var service = new PartSuppliersService(clientContext);
                service.StartWorkflow(workflowSubscriptionId, id, payload);
            }

            TempData["Message"] = "Workflow Successfully Started!";
            return RedirectToAction("Details", new { id = id, SPHostUrl = spHostUrl });
        }

The StartWorkflow method creates a workflow instance and passes the two values (appWebUrl and webServiceUrl) stored in the payload variable to the workflow.

public void StartWorkflow(Guid subscriptionId, int itemId, Dictionary<string, object> payload)
        {
            var workflowServicesManager = new WorkflowServicesManager(clientContext, clientContext.Web);

            var subscriptionService = workflowServicesManager.GetWorkflowSubscriptionService();
            var subscription = subscriptionService.GetSubscription(subscriptionId);

            var instanceService = workflowServicesManager.GetWorkflowInstanceService();
            instanceService.StartWorkflowOnListItem(subscription, itemId, payload);
            clientContext.ExecuteQuery();
        }

After the workflow starts, and before it is approved, the workflow makes a query to the Northwind web service to retrieve the suppliers list for the country that you selected. It does this by using an HttpSend activity that sends an OData query to this endpoint: "http://services.odata.org/V3/Northwind/Northwind.svc/Suppliers/?$filter=Country eq '" + country.Replace("'", "''") + "'&$select=CompanyName". For example, if the user selected Canada, the resulting request URL is http://services.odata.org/V3/Northwind/Northwind.svc/Suppliers/?$filter=Country eq 'Canada'&$select=CompanyName. The HttpSend activity should be configured as a GET request with an Accept header that specifies JSON with no metadata: application/json;odata=nometadata (Figures 9 and 10).

Figure 9. HttpSend activity configuration

Screenshot that shows the HTTP Send activity grid configured as a GET request

Figure 10. HttpSend activity request headers

Screenshot that shows the Request Headers grid for the HTTP Send activity

If the user selected Canada for the new supplier list item, for example, the JSON-formatted response will be as shown in the following example.

{
    value: [
        {
            CompanyName: "Ma Maison"
        },
        {
            CompanyName: "Forêts d'érables"
        }
    ]
}

After the workflow starts, it makes a POST HTTP request that contains the suppliers list to the remotely hosted web application via the proxy. It does this via an HttpSend activity that queries the web proxy URL: appWebUrl + "/_api/SP.WebProxy.invoke". The workflow then passes the supplier list that it has received from the Northwind service by building and passing a custom service payload. The Create Custom Service Payload activity properties contain the supplier list and the ID for the supplier country, as shown in Figure 11.
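
Expressed as a raw REST call, the request that these activities build corresponds to a POST to appWebUrl + "/_api/SP.WebProxy.invoke" with a body along the following lines. The remote URL and body content here are illustrative, and the header entries are elided:

```json
{
  "requestInfo": {
    "__metadata": { "type": "SP.WebRequestInfo" },
    "Url": "https://remoteapp.example.com/api/Data",
    "Method": "POST",
    "Body": "{ \"Id\": 1, \"Suppliers\": [ { \"CompanyName\": \"Ma Maison\" } ] }"
  }
}
```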

Figure 11. Create Custom Service Payload activity

Screenshot that shows property and value grids for a custom web service payload activity

The Create WebProxy Payload activity constructs the request that passes the contents of this payload to the web proxy URL (Figure 12).

Figure 12. Create WebProxy Payload configuration

Screenshot that shows the Create WebProxy Payload activity dialog

The properties for this activity specify the add-in web URL, the POST request content length and type, and the request acceptance type via request headers (Figure 13).

Figure 13. WebProxy Payload activity properties grid

Screenshot that shows the property grid for the WebProxy Payload activity

After the workflow has constructed the payload and the request, it passes the request to the web proxy by using an HttpSend activity that’s configured as a POST request to the web proxy URL. The request headers specify JSON-formatted OData in the Content-Type and Accept headers (Figure 14).

Figure 14. Properties for the HttpSend activity

Screenshot that shows the Request Headers dialog for the HTTP Send activity

The Post method inside the Controllers\DataController.cs file accepts the contents of the request that the workflow sends through the web proxy. The Post method in the previous sample called a method for retrieving the supplier list from Northwind as well as one for updating the corresponding SharePoint supplier list. Since the workflow in this sample has already queried the Northwind service, this version of the method needs only to update the SharePoint list. It also passes the add-in web URL and the access token (which is passed by the web proxy) to the UpdateSuppliers method in the Services\PartSuppliersService.cs file, as shown in the following code example.

public void Post(UpdatePartSupplierModel model)
        {
            var request = HttpContext.Current.Request;
            var spAppWebUrl = request.Headers["SPAppWebUrl"];
            var accessToken = request.Headers["X-SP-AccessToken"];

            using (var clientContext = TokenHelper.GetClientContextWithAccessToken(spAppWebUrl, accessToken))
            {
                var service = new PartSuppliersService(clientContext);
                service.UpdateSuppliers(model.Id, model.Suppliers.Select(s => s.CompanyName));
            }
        }

The UpdateSuppliers method in the Services\PartSuppliersService.cs file updates the Suppliers field of the newly created list item.

public void UpdateSuppliers(int id, IEnumerable<string> supplierNames)
        {
            var item = list.GetItemById(id);
            clientContext.Load(item);
            clientContext.ExecuteQuery();

            string commaSeparatedList = String.Join(",", supplierNames);
            item["Suppliers"] = commaSeparatedList;
            item.Update();
            clientContext.ExecuteQuery();
        }

If the workflow is approved, it changes the value of the isApproved field of the list item to true.

7.3 Associate a workflow with the host web

<a name="bk3"> </a>

The Workflow.AssociateToHostWeb sample shows you how to deploy a workflow to the host web and associate it with a list on the host web by using tools in Visual Studio 2013. The instructions for this sample show you how to create a workflow in Visual Studio, deploy it to the host web, and associate it with a list on the host web.

The sample contains a simple workflow that can be associated with any list. The instructions for deploying this workflow show you how to work around the current limitations of the Visual Studio workflow tools by packaging the app, opening it up and editing a configuration file, and then repackaging it manually before deploying it to the host web.

When you open this project in Visual Studio, you’ll see that it is a simple, generic workflow that is designed to work with any SharePoint list. Other than the workflow task list, it doesn’t deploy any list with which it can be associated.

Note The Visual Studio 2013 tools do not directly support the task shown in this sample. This sample provides a useful workaround. If the Visual Studio tools are updated in the future, you might not need to use this workaround.

7.3.1 Deploy a workflow to the host web

  1. Open the shortcut menu (right-click) for the Workflow.AssociateToHostWeb add-in project in Solution Explorer, and select Publish. You’ll see a window that contains a Package the app button, as shown in Figure 15.

    Figure 15. Publish your add-in screen

    Screenshot that shows the Publish your app page for publishing the sample app

  2. When you choose Package the app, Visual Studio creates a Workflow.AssociateToHostWeb.app file in the bin\Debug\app.publish\1.0.0.0 directory of your solution. This .app file is a type of zip file.

  3. Extract the contents of the file by first changing the file extension to .zip.

  4. In the directory that you’ve extracted, locate and open the XML file named WorkflowManifest.xml. The file is empty.

  5. Add the following XML fragment to the file and then save the file.

      <SPIntegratedWorkflow xmlns="http://schemas.microsoft.com/sharepoint/2014/app/integratedworkflow">
        <IntegratedApp>true</IntegratedApp>
      </SPIntegratedWorkflow>
  6. Select all the files in the extracted folder, and then open the shortcut menu (right-click) for the files and select Send to > Compressed (zipped) folder.

  7. On the zip file you just created, change the file extension to .app. You should now have a new Workflow.AssociateToHostWeb.app package that contains the updated WorkflowManifest.xml file.

  8. Add the add-in to your app catalog.

  9. Install the add-in to your host site.

  10. Go to a list on your host site and select the List editing option at the top left of the page. You’ll see a Workflow Settings drop-down menu (Figure 16).

    Figure 16. Workflow settings for a list

    Screenshot that shows workflow settings for a list

  11. Select Add a Workflow from the drop-down menu.

  12. You will now see a selection option similar to the image in Figure 17. Select the Workflow.AssociateToHostWeb app from the list of available options.

    Figure 17. Add a workflow settings

    Screenshot that shows the Add a Workflow settings page

You have now deployed the workflow to the host web and associated it with a list on the host web. You can trigger a workflow manually, or you can update the workflow in Visual Studio so that it is triggered in other ways.

7.4 Additional resources

<a name="bk_addresources"> </a>

8 Composite business apps for SharePoint 2013 and SharePoint Online

Use composite business apps to integrate your SharePoint solutions with your business processes and technologies. Decide whether a SharePoint-hosted or provider-hosted add-in is the right choice for your solution.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

Composite business apps are apps that are tightly integrated with your business processes and line-of-business (LOB) technologies (like databases and web services). These apps typically include a number of complex interactions with users and with other technologies.

The sample composite business apps described in this section provide building blocks that you can use to integrate your technologies and processes with the SharePoint 2013 add-in model.

8.1 SharePoint-hosted vs. provider-hosted add-ins

<a name="sectionSection0"> </a>

Before you create composite business apps, you first need to decide where the apps will be hosted. SharePoint-hosted add-ins work best when you can scope your requirements to single-site implementations that you can handle with JavaScript. Provider-hosted add-ins are better for more complex business requirements.

The following lists summarize the factors to consider when you decide where to host your add-ins.

Choose a SharePoint-hosted add-in when:

  • You can do everything you need to do with JavaScript.

  • The add-in does not need to do any work across more than one site; for example, team calendar add-ins and featured news rotators.

  • Content is sensitive and needs to stay securely and entirely in SharePoint.

Choose a provider-hosted add-in when:

  • You need to use languages other than JavaScript.

  • The add-in needs to access information and do work across more than a single site; for example, site collection provisioning add-ins.

  • The add-in needs to integrate with other line-of-business technologies.

  • The add-in requires elevated permissions that are made possible by the app-only policy.

  • The add-in requires a highly customized UI.

8.2 In this section

<a name="sectionSection1"> </a>

Article Sample Shows you how to…
Migrate InfoPath forms to SharePoint 2013 Migrate your InfoPath 2013 forms to other supported technologies.
Data storage options in SharePoint Online Core.DataStorageModels Use different types of storage models to store your SharePoint Online data.
Corporate event add-in integration with SharePoint BusinessApps.CorporateEventsApp Use a provider-hosted add-in to implement complex business tasks.
Call web services from SharePoint workflows Workflow.CallCustomService, Workflow.CallServiceUpdateSPViaProxy, Workflow.AssociateToHostWeb Use provider-hosted add-ins to call remote web services that contain business data.

8.3 Additional resources

<a name="bk_addresources"> </a>

9 Configure Office 365 API Projects for Distribution

9.0.1 Summary

This page explains some steps developers should consider taking in projects that leverage the Office 365 APIs before distributing them to other developers, to customers, or to source control systems such as Team Foundation Server, Git, or Visual Studio Online.

Specifically, this page looks at two steps:

  • Fixing the Azure AD Graph Client NuGet package reference.

  • Cleaning the web.config file of app-specific details.

10 Fixup Azure AD Graph Client NuGet Package Reference

All projects that leverage the Office 365 API SDKs by adding a connected service include a NuGet package that adds Office 365 and Azure AD references to the project created in Visual Studio.

The NuGet package added to the project by the Office 365 API Tools in Visual Studio is not present in the NuGet package registry, so attempts to perform a NuGet package restore will fail because a matching package cannot be found.

10.1 Understanding the Problem

The Office 365 API Tools for Visual Studio 2013, version 1.3.41104.1, add multiple NuGet packages to projects as part of completing the Add Connected Service wizard. One package in particular presents a challenge: the Microsoft Azure Active Directory Graph Client Library.

Visual Studio and its add-ins typically contain a local copy of their NuGet packages, so developers do not always have to be connected to the internet to download them. The package that the tools include has an ID of Microsoft.Azure.ActiveDirectory.GraphClient and a version of 1.0.22.

When projects are committed to source control, the packages are typically not included as part of the commit, because they add a lot of extra storage demands and unnecessarily increase the size of the project when sharing it with other developers. Therefore, one of the first tasks developers perform after getting a copy of a project from source control is to run a NuGet package restore.

The challenge is that a package with the same ID and version does not exist in the NuGet package registry; there is no package with an ID of Microsoft.Azure.ActiveDirectory.GraphClient and a version of 1.0.22. A package with that ID does exist in the registry, but only under different versions.

10.2 Fixing the Azure AD Graph Client NuGet Package Reference

Until the Office 365 API Tools for Visual Studio 2013 are updated to fix this issue, it is recommended to alter the project prior to committing it to your source control system, regardless of whether you are using Team Foundation Server, Visual Studio Online, Git, or any other solution.

After creating the project, look within the project’s packages.config file and search for a package with an ID of Microsoft.Azure.ActiveDirectory.GraphClient & version of 1.0.22. The safest way to update the project is to uninstall & then reinstall the package.

Open the Package Manager Console in Visual Studio and enter the following to uninstall the package:

    PM> Uninstall-Package -Id Microsoft.Azure.ActiveDirectory.GraphClient

If the uninstall throws an error about not finding the package, simply remove the package reference from the packages.config file manually & save your changes.

Now, install the public version of the same NuGet package from the public registry:

    PM> Install-Package -Id Microsoft.Azure.ActiveDirectory.GraphClient -Version 2.0.2

The above example references a specific version of the Azure AD Graph client library that is known to work with the Office 365 APIs. Later versions may also work; you can omit the -Version argument to install the latest available version.
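After the reinstall, the packages.config entry should point at the public package (the targetFramework value shown is illustrative):

```xml
<!-- Public package from nuget.org; restores correctly on other machines. -->
<package id="Microsoft.Azure.ActiveDirectory.GraphClient"
         version="2.0.2" targetFramework="net45" />
```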


11 Cleaning the web.config for App-Specific Details

The Office 365 API Tools for Visual Studio add the ability to create a new Azure AD application with the necessary permissions for the Office 365 APIs using the Connected Service wizard in Visual Studio. When completing the wizard, multiple entries and customizations are made to the project’s web.config file.

These modifications include the following app settings:

  • ida:ClientID: The unique ID of the application created in your Azure AD tenant.
  • ida:Password: The Azure AD application’s key that was generated by the Connected Service wizard.
  • ida:AuthorizationUri: The endpoint used to authenticate with Azure AD.
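As a sketch, the wizard's additions to the appSettings section of web.config look like the following (all values are placeholders, not real credentials or the exact strings the tools emit):

```xml
<appSettings>
  <!-- Written by the Connected Service wizard; values are placeholders. -->
  <add key="ida:ClientID" value="00000000-0000-0000-0000-000000000000" />
  <add key="ida:Password" value="(application key generated by the wizard)" />
  <add key="ida:AuthorizationUri" value="https://login.windows.net/" />
</appSettings>
```

Blanking the ida:ClientID and ida:Password values while leaving the keys in place lets each developer fill in their own application's details.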

The ida:ClientID and ida:Password are unique to the Azure AD application. Some development teams may elect for each developer to code against their own application, similar to how developers work against their own local development database. Therefore, you can think of the ida:ClientID and ida:Password as similar to database connection strings.

The next time a developer uses the Connected Service wizard to create a new Azure AD application for the project, the wizard will detect the ida:ClientID and try to connect to a matching application in the current user's Azure AD tenant. If a match is not found, the Connected Service wizard will throw an error.

Therefore, prior to committing the project to source control or sharing it with other developers, it is recommended to remove the values from the ida:ClientID and ida:Password app settings in the web.config.



11.0.2 Applies to

  • Office 365 Multi Tenant (MT)
  • Office 365 Dedicated (D)

11.0.3 Author

Andrew Connell - @andrewconnell

11.0.4 Version history

| Version | Date | Comments |
| --- | --- | --- |
| 0.1 | December 31, 2014 | First draft |

12 Configure SharePoint Provider-Hosted Add-ins for Distribution

12.0.1 Summary

This page explains issues that may arise when sharing a SharePoint Provider-Hosted application with other developers or when obtaining a copy from a source control system such as Team Foundation Server, Git or Visual Studio Online.

13 Configure SharePoint Provider-Hosted Add-ins for Distribution

All SharePoint Provider-Hosted add-ins created using Visual Studio 2013 include a NuGet package that adds SharePoint-specific code and references to the web application that serves as the RemoteWeb for the SharePoint add-in.

The NuGet package added to the web application project by the Office Developer Tools in Visual Studio is not present in the NuGet package registry, so attempts to perform a NuGet package restore will fail because no matching package can be found.

13.1 Understanding the Problem

The Office Developer Tools for Visual Studio 2013, version 12.0.31105, adds a NuGet package to web applications created as the RemoteWeb for SharePoint Provider-Hosted add-ins. This package, the App for SharePoint Web Toolkit, adds the following things to the web project:

  • The SharePoint Client-Side Object Model (CSOM) assemblies and references to them
  • A code file TokenHelper.cs that assists in the authentication process for add-ins.
  • A code file SharePointContext.cs that helps in creating and maintaining a SharePoint context within the web application.

Visual Studio, or add-ins for it, typically contain a local copy of NuGet packages so developers do not always have to be connected to the internet to download them. The package that the tools include has an ID of AppForSharePoint16WebToolkit.

When projects are committed to source control, the packages are typically not included as part of the commit, because they add extra storage demands and unnecessarily increase the size of the project when sharing it with other developers. Therefore, one of the first tasks a developer does after getting a copy of a project from source control is to run a NuGet package restore.

The challenge is that a package with the same ID does not exist in the NuGet package registry; there is no package with an ID of AppForSharePoint16WebToolkit. Instead, the exact same package was added to the NuGet package registry as AppForSharePointWebToolkit (http://www.nuget.org/packages/AppForSharePointWebToolkit); notice the lack of the '16' in the ID.
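For illustration, the only real difference between the two packages is the ID; the version and targetFramework values below are illustrative:

```xml
<!-- Installed locally by the Office Developer Tools (not on nuget.org): -->
<package id="AppForSharePoint16WebToolkit" version="1.0.0" targetFramework="net45" />

<!-- The equivalent public package on nuget.org: -->
<package id="AppForSharePointWebToolkit" version="1.0.0" targetFramework="net45" />
```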

13.2 Preparing a SharePoint Provider-Hosted Add-in Project for Source Control / Distribution

Until the Office Developer Tools for Visual Studio 2013 are updated to fix this issue, it is recommended to alter the project prior to committing it to your source control system, regardless of whether you are using Team Foundation Server, Visual Studio Online, Git, or any other solution.

After creating the project, look within the project’s packages.config file and search for a package with an ID of AppForSharePoint16WebToolkit. The safest way to update the project is to uninstall & then reinstall the package.

Open the Package Manager Console in Visual Studio and enter the following to uninstall the package:

    PM> Uninstall-Package -Id AppForSharePoint16WebToolkit

If the uninstall throws an error about not finding the package, simply remove the package reference from the packages.config file manually & save your changes.

Now, install the public version of the same NuGet package from the public registry:

    PM> Install-Package -Id AppForSharePointWebToolkit


13.2.2 Applies to

  • Office 365 Multi Tenant (MT)
  • Office 365 Dedicated (D)
  • SharePoint 2013 on-premises

13.2.3 Author

Andrew Connell - @andrewconnell

13.2.4 Version history

| Version | Date | Comments |
| --- | --- | --- |
| 0.1 | December 31, 2014 | First draft |

14 Connect SharePoint app parts by using SignalR

Implement real-time communication between SharePoint app parts by using SignalR.

Applies to: add-ins for SharePoint | SharePoint 2013 | SharePoint Online

The Core.ConnectedAppParts sample shows you how to use a provider-hosted app as a message broker or chat hub to send and receive messages from all app parts connected to the chat hub. Use this solution if you are converting your SharePoint web parts to app parts, and need your app parts to communicate with each other.

14.1 Before you begin


To get started, download the Core.ConnectedAppParts sample app from the Office 365 Developer patterns and practices project on GitHub.

14.2 Connected app parts and chat hub architecture


Figure 1 shows the connected app parts and chat hub architecture.

Figure 1. Connected app parts and chat hub architecture

Illustration showing the architecture of the Core.ConnectedAppParts code sample

The connected app parts and chat hub architecture includes the following components:

  1. SharePoint pages that include app parts. The app parts use the SignalR jQuery library and contain JavaScript code that sends and receives messages from the chat hub running in the provider-hosted add-in. Each app part must first connect to the chat hub. After connecting, app parts can send and receive messages from other connected app parts.

  2. A SignalR Hub Proxy, which establishes a socket connection to the chat hub. The SignalR Hub Proxy brokers messages between the app part’s JavaScript code and the chat hub’s C# code.

  3. The chat hub, which uses the SignalR library to route messages from sending to receiving app parts. In this code sample, all app parts receive messages from the chat hub, including the app part that sent the message.

Note Because app parts run in an IFRAME, you cannot use JavaScript alone to communicate between app parts.

14.3 Use the Core.ConnectedAppParts app


To see a demo of two app parts communicating by using SignalR:

  1. When you run the app and the start page is displayed, choose Back to Site.

  2. Choose Settings > Add a page.

  3. In New page name, enter ConnectedAppParts, and then choose Create.

  4. Choose Insert > App Part.

  5. Choose Connected Part - One > Add.

  6. Choose Insert > App Part.

  7. Choose Connected Part - Two > Add.

  8. Choose Save.

  9. In Connected Part - One, enter Hello World from App Part 1, and then choose Send.

  10. Verify that the message Hello World from App Part 1 appears in both Connected Part - One and Connected Part - Two app parts.

In this code sample, the Core.ConnectedAppParts project contains two app parts (ConnectedPartOne and ConnectedPartTwo) that are deployed to the host web. ConnectedPartOne and ConnectedPartTwo run in an IFRAME. The web page contents for ConnectedPartOne and ConnectedPartTwo are defined in the Core.ConnectedAppPartsWeb project in Pages\ConnectedPartOne.aspx and Pages\ConnectedPartTwo.aspx. Both pages run in the provider-hosted app with the chat hub (ChatHub.cs) and use inline JavaScript to:

  1. Include the SignalR jQuery library.

  2. Connect to the SignalR Hub Proxy using $.connection.chatHub.

  3. Use chat.client.broadcastMessage to define a function to receive broadcast messages sent by the chat hub. In this code sample, the name of the app part and the message being broadcast are displayed in the discussion list.

  4. Start the connection to the chat hub using $.connection.hub.start().done. When the connection is established, an event handler is defined on the sendmessage button’s click event. This event handler calls chat.server.send to send the name of the app part and the message entered by the user to the chat hub.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

    <!--Script references. -->
    <!--Reference the jQuery library. -->
    <script src="../Scripts/jquery-1.6.4.min.js" ></script>
    <!--Reference the SignalR library. -->
    <script src="../Scripts/jquery.signalR-2.0.3.min.js"></script>
    <!--Reference the autogenerated SignalR hub script. -->
    <script src="../signalr/hubs"></script>
    <!--Add script to update the page and send messages.--> 
    <script type="text/javascript">
        $(function () {
            // Declare a proxy to reference the hub. 
            var chat = $.connection.chatHub;
            // Create a function that the hub can call to broadcast messages.
            chat.client.broadcastMessage = function (name, message) {
                // Html encode display name and message. 
                var encodedName = $('<div />').text(name).html();
                var encodedMsg = $('<div />').text(message).html();
                // Add the message to the page. 
                $('#discussion').append('<li><strong>' + encodedName
                    + '</strong>:&nbsp;&nbsp;' + encodedMsg + '</li>');
            };
            // Set initial focus to message input box.  
            $('#message').focus();
            // Start the connection.
            $.connection.hub.start().done(function () {
                $('#sendmessage').click(function () {
                    // Call the Send method on the hub. 
                    chat.server.send($('#displayname').val(), $('#message').val());
                    // Clear text box and reset focus for next comment. 
                    $('#message').val('').focus();
                });
            });
        });
    </script>

When the inline JavaScript code in ConnectedPartOne.aspx runs chat.server.send, a call is made to the Send method in ChatHub.cs. The Send method in ChatHub.cs receives the broadcasting app part’s name and the message, and then broadcasts the information to all connected app parts by using Clients.All.broadcastMessage. Clients.All.broadcastMessage calls the JavaScript function (in all connected app parts) that was defined by using chat.client.broadcastMessage.

        public void Send(string name, string message)
        {
            // Call the broadcastMessage method to update the app parts.
            Clients.All.broadcastMessage(name, message);
        }

Important In this code sample, all app parts connected to the chat hub receive all messages sent through the chat hub. Consider filtering messages based on session ID to determine which app parts should receive which messages.

14.4 Additional resources


15 Corporate event app integration with SharePoint

Integrate add-ins for SharePoint into your business operations by using a provider-hosted add-in that can implement multiple complex business tasks.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The BusinessApps.CorporateEventApp sample shows you how to implement a centralized corporate events management system as a provider-hosted add-in that integrates with your existing line-of-business (LOB) applications.

More specifically, the BusinessApps.CorporateEventApp sample shows you how to implement an ASP.NET web application that interacts with SharePoint as a data store for LOB entities. It also shows you how to implement multiple steps in a complex business task with a single provider-hosted add-in.

This sample app implements a centralized management system that consists of SharePoint entities (lists and content types). For each new content type, it creates corresponding LOB entities in an ASP.NET web application. Components of the web application run as remotely hosted add-in parts within the SharePoint interface and also as pages running entirely on the remote web host. The add-in overrides the default welcome page for your SharePoint site so that it can present a custom-branded interface on the site home page.

15.1 Using the BusinessApps.CorporateEventApp sample

When you start the BusinessApps.CorporateEventApp sample app, the Home page provides an option for you to configure the sample. It also points you to a number of resources for more information.

When you choose Start configuration, you go to the Configuration page, as shown in Figure 1. When you choose Initialize the data store on the Configuration page, the sample deploys the SharePoint entities and sample data that support the sample.

Figure 1. Configuration page

Screenshot that shows the initialize data screen

After you initialize the data store, you can go back to your site to see a new welcome page (the EventsHome.aspx page), which is populated by two web parts that the add-in deployed, as shown in Figure 2. In the left column, you'll see the four new lists installed by the add-in. The Corporate Events list is populated with sample data.

Figure 2. Welcome page with web parts initialized

Screenshot that shows the add-in Start page with web parts deployed

Each web part contains links to each of the displayed events, where you can see the event details. When you choose a link, the event details page runs separately on the remote host, as shown in Figure 3. From that page you can register yourself for the event, or choose Back to Site to return to the SharePoint site.

Figure 3. Event details page

Screenshot that shows the add-in UI with corporate event screen showing event details

The registration page also runs separately on the remote host, and also contains a link back to the SharePoint host site (see Figure 4). When you finish registering for the event, your name will appear on the newly installed Event Registration list.

Figure 4. Event registration page

Screenshot that shows the app Corporate events event registration screen

The Models\DataInitializer.cs file contains the code that runs when you choose Initialize the data store. The code in this file creates and deploys four new SharePoint lists, along with four corresponding content types:

  • Corporate events

  • Event registration

  • Event speakers

  • Event sessions

The code in this file uses a method similar to the one that is used in the Core.ModifyPages sample to add a custom page to the site.

            // Create default wiki page.
            web.AddWikiPage("Site Pages", "EventsHome.aspx");

AddWikiPage is an extension method from the Core.DevPnPCore project that adds a new page to the site. This new page also becomes the new WelcomePage for the site. The code then prepares to add the web parts to this page.

            var welcomePage = "SitePages/EventsHome.aspx";
            var serverRelativeUrl = UrlUtility.Combine(web.ServerRelativeUrl, welcomePage);

            File webPartPage = web.GetFileByServerRelativeUrl(serverRelativeUrl);

            if (webPartPage == null) {
                return;
            }

            web.Context.Load(webPartPage);
            web.Context.Load(webPartPage.ListItemAllFields);
            web.Context.Load(web.RootFolder);
            web.Context.ExecuteQuery();

            web.RootFolder.WelcomePage = welcomePage;
            web.RootFolder.Update();
            web.Context.ExecuteQuery();

The Models\DataInitializer.cs file also defines the XML for both web parts that are displayed on the new welcome page and then adds each one to the page. The following examples show how this works for the Featured Events web part.

Define web part XML

            var webPart1 = new WebPartEntity(){
                WebPartXml = @"<webParts>
  <webPart xmlns='http://schemas.microsoft.com/WebPart/v3'>
    <metaData>
      <type name='Microsoft.SharePoint.WebPartPages.ClientWebPart, Microsoft.SharePoint, Version=16.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c' />
      <importErrorMessage>Cannot import this Web Part.</importErrorMessage>
    </metaData>
    <data>
      <properties>
        <property name='Description' type='string'>Displays featured events</property>
        <property name='FeatureId' type='System.Guid, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'>3a6d7f41-2de8-4e69-b4b4-0325bd56b32c</property>
        <property name='Title' type='string'>Featured Events</property>
        <property name='ProductWebId' type='System.Guid, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'>12ae648f-27db-4a97-9c63-37155d3ace1e</property>
        <property name='WebPartName' type='string'>FeaturedEvents</property>
        <property name='ProductId' type='System.Guid, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'>3a6d7f41-2de8-4e69-b4b4-0325bd56b32b</property>
        <property name='ChromeState' type='chromestate'>Normal</property>
      </properties>
    </data>
  </webPart>
</webParts>",
                WebPartIndex = 0,
                WebPartTitle = "Featured Events",
                WebPartZone = "Rich Content"
            };

Add the web parts to the page

            var limitedWebPartManager = webPartPage.GetLimitedWebPartManager(Microsoft.SharePoint.Client.WebParts.PersonalizationScope.Shared);
            web.Context.Load(limitedWebPartManager.WebParts);
            web.Context.ExecuteQuery();

            for (var i = 0; i < limitedWebPartManager.WebParts.Count; i++) {
                limitedWebPartManager.WebParts[i].DeleteWebPart();
            }
            web.Context.ExecuteQuery();

            var oWebPartDefinition1 = limitedWebPartManager.ImportWebPart(webPart1.WebPartXml);
            var oWebPartDefinition2 = limitedWebPartManager.ImportWebPart(webPart2.WebPartXml);
            var wpdNew1 = limitedWebPartManager.AddWebPart(oWebPartDefinition1.WebPart, webPart1.WebPartZone, webPart1.WebPartIndex);
            var wpdNew2 = limitedWebPartManager.AddWebPart(oWebPartDefinition2.WebPart, webPart2.WebPartZone, webPart2.WebPartIndex);
            web.Context.Load(wpdNew1);
            web.Context.Load(wpdNew2);
            web.Context.ExecuteQuery();

In the Models directory of your web project, you'll notice that this ASP.NET MVC web application contains four classes whose names correspond to the lists and content types that the add-in installed:

  • Event.cs (Corporate Events)

  • Registration.cs (Event Registration)

  • Session.cs (Event Sessions)

  • Speaker.cs (Event Speakers)

These four classes and their corresponding SharePoint content types together make up the four LOB entities used in this add-in.

The DataInitializer.cs file adds sample data for the Corporate Events list by creating sample Event objects that correspond with the Corporate Events content type and which the app adds to the Corporate Events list. When you register for an event, the app creates a Registration object that corresponds with the Event Registration content type and that the app adds to the Event Registration list. The sample has not yet fully implemented the Session and Speaker objects, so the app currently doesn’t work with those objects.

The following table lists the properties that need to be implemented by classes that inherit from the BaseListItem abstract class.

Table 1. Properties to implement in classes that inherit from BaseListItem

| Member | Description |
| --- | --- |
| ContentTypeName | Gets the content type that is associated with the item. If null, the default library content type will be assigned to the item when you save it. |
| FieldInternalNames | A list of field names that can be cached to improve performance when checking field data prior to a save. |
| ListTitle | Gets the title of the list (case sensitive). |

The following table lists the methods that have to be implemented by classes that inherit from the BaseListItem abstract class.

Table 2. Methods to implement in classes that inherit from BaseListItem

| Method | Description |
| --- | --- |
| ReadProperties(ListItem) | Reads properties from the ListItem object using the BaseGet and BaseGetEnum methods and assigns them to properties of the subclass. |
| SetProperties(ListItem) | Sets properties on the ListItem object using the BaseSet and BaseSetTaxonomyField methods of the abstract class. |

The following table lists the helper methods from the BaseListItem class that subclasses use to implement the ReadProperties and SetProperties methods.

Table 3. BaseListItem helper methods

| Helper method | Description |
| --- | --- |
| BaseGet(ListItem item, string internalName) | Gets the property defined by the internalName parameter from the ListItem and returns it as generic type T. |
| BaseSet(ListItem item, string internalName, object value) | Sets the ListItem property defined by the internalName parameter. |
| BaseSetTaxonomyField(ListItem item, string internalName, string label, Guid termId) | Sets the ListItem taxonomy field defined by the internalName and termId parameters. |
| BaseGetEnum(ListItem item, string internalName, T defaultValue) | Gets the value of the enum property defined by the internalName parameter. Returns the value of the defaultValue parameter if the property is not set. |

The Event.cs file contains the following implementations of the ReadProperties and SetProperties methods.

ReadProperties

        protected override void ReadProperties(ListItem item) {
            RegisteredEventId = BaseGet<string>(item, FIELD_REGISTERED_EVENT_ID);
            Description = BaseGet<string>(item, FIELD_DESCRIPTION);
            Category = BaseGet<string>(item, FIELD_CATEGORY);
            EventDate = BaseGet<DateTime?>(item, FIELD_DATE);
            Location = BaseGet<string>(item, FIELD_LOCATION);
            ContactEmail = BaseGet<string>(item, FIELD_CONTACT_EMAIL);
            Status = BaseGetEnum<EventStatus>(item, FIELD_STATUS);
            var imageUrl = BaseGet<FieldUrlValue>(item, FIELD_IMAGE_URL);

            if (imageUrl != null)
                ImageUrl = imageUrl.Url;
        }

SetProperties

        protected override void SetProperties(ListItem item) {
            BaseSet(item, FIELD_REGISTERED_EVENT_ID, RegisteredEventId);
            BaseSet(item, FIELD_DESCRIPTION, Description);
            BaseSet(item, FIELD_CATEGORY, Category);
            BaseSet(item, FIELD_DATE, EventDate);
            BaseSet(item, FIELD_LOCATION, Location);
            BaseSet(item, FIELD_CONTACT_EMAIL, ContactEmail);
            BaseSet(item, FIELD_STATUS, Status.ToEnumDescription());
            BaseSet(item, FIELD_IMAGE_URL, ImageUrl);
        }

The following code examples show how the underlying BaseGet and BaseSet methods are defined in BaseListItem.cs.

BaseGet

        protected T BaseGet<T>(ListItem item, string internalName) {
            var field = _fields[internalName.ToLowerInvariant()];
            var value = item[field.InternalName];
            return (T)value;
        }

BaseSet

        protected void BaseSet(ListItem item, string internalName, object value) {
            if (_fields.ContainsKey(internalName)) {
                var field = _fields[internalName.ToLowerInvariant()];

                if (field is FieldUrl && value is string) {
                    var urlValue = new FieldUrlValue() {
                        Url = value.ToString()
                    };
                    value = urlValue;
                }
            }
            item[internalName] = value;
        }

The BaseListItem class also contains a Save method that is used to save each LOB entity that the app creates and manipulates. This method loads the list and checks whether the current item has an ID greater than 0. If it does not, the method assumes the item is new and creates a new list item. It uses the SetProperties method to set properties on the ListItem, and after the save it refreshes the item and reads the values back into the subclass by using the ReadProperties method.

        public void Save(Web web) {
            var context = web.Context;
            var list = web.GetListByTitle(ListTitle);
            if (!IsNew && Id > 0) {
                ListItem = list.GetItemById(Id);
            }
            else {
                var listItemCreationInfo = new ListItemCreationInformation();
                ListItem = list.AddItem(listItemCreationInfo);
            }

            // Ensure that the fields have been loaded.
            EnsureFieldsRetrieved(ListItem);

            // Set the properties on the list item.
            SetProperties(ListItem);
            BaseSet(ListItem, TITLE, Title);

            // Use if you want to override the created/modified date.
            //BaseSet(ListItem, CREATED, Created);
            //BaseSet(ListItem, MODIFIED, Modified);

            ListItem.Update();

            if (!string.IsNullOrEmpty(ContentTypeName)) {
                var contentType = list.GetContentTypeByName(ContentTypeName);
                if (contentType != null)
                    BaseSet(ListItem, "ContentTypeId", contentType.Id.StringValue);
            }

            ListItem.Update();

            // Execute the batch.
            context.ExecuteQuery();

            // Reload the properties.
            ListItem.RefreshLoad();
            UpdateBaseProperties(ListItem);
            ReadProperties(ListItem);
        }

15.2 Additional resources


16 Customization of OneDrive for Business sites

16.0.1 Summary

OneDrive for Business sites can be customized in Office 365, or with the app model in general, based on company requirements. The actual techniques used to perform this customization differ from those used on-premises, since only app model techniques can be used. This page contains details on the patterns that can be used with the app model to customize OneDrive for Business sites.

17 Why would you customize OneDrive for Business sites?

There are numerous aspects to consider when applying customizations to OneDrive for Business (OD4B) sites. You certainly can customize these sites, since they are SharePoint sites, but you should always consider the short and long term impact of the customizations. As a rule of thumb, we offer the following high-level guidelines for customizing OD4B sites.

  • Apply branding customizations using Office 365 themes or the SharePoint site theming engine
  • If the theming engines are not enough, you can adjust some CSS settings using the alternate CSS option
  • Avoid customizing OD4B sites using custom master pages, since this will cause additional long term costs and challenges with future updates
  • In most cases, you can achieve all common branding scenarios with themes and alternate CSS, so this is not really a limiting factor
  • If you choose to use custom master pages, be prepared to apply changes to the sites when major functional updates are applied to Office 365
  • You can use JavaScript embedding to modify or hide functionality on the site
  • You can use CSOM to control, for example, language or regional settings in OD4B sites (see the new APIs)
  • We do not recommend usage of content types and site columns in OD4B sites to avoid challenges with the
  • Think of OD4B sites as being for personal, unstructured data and documents. Team sites and collaboration sites are for company data and documents, where you can certainly use whatever information management policies and metadata you want.
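As a minimal sketch of the JavaScript embedding pattern mentioned in the guidelines above (all helper names here are hypothetical, not code from the guidance): an embedded script can hide or tweak page elements with CSS instead of replacing the master page.

```javascript
// Hypothetical helper: build a CSS rule that hides a page element by id.
// Keeping the rule construction separate makes it easy to test.
function buildHideRule(elementId) {
  return '#' + elementId + ' { display: none; }';
}

// In the browser, the embedded script would inject the rule into the page
// (doc is the page's document object).
function injectHideRule(doc, elementId) {
  var style = doc.createElement('style');
  style.appendChild(doc.createTextNode(buildHideRule(elementId)));
  doc.head.appendChild(style);
  return style;
}
```

Because the customization lives in script rather than in a custom master page, it survives service updates far better, in line with the guidelines above.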

As a summary, customizations are definitely supported in Office 365, and you can keep using them with OD4B sites. We simply want to ensure that you consider the short and long term impact of these customizations from an operational and maintenance perspective. This is not really specific to SharePoint; it is a rule of thumb for any IT solution built on any platform.

Here's an example of an OD4B site that has been customized using the above guidelines. In this case, the end result has been achieved with a combination of Office 365 themes, a site theme, and the so-called JavaScript embedding pattern.

A customized OD4B site.

18 Challenge with applying OneDrive for Business site customizations?

Let's start by defining the challenge and what we are trying to solve. Technically, each OneDrive for Business site currently uses an architecture identical to the one that personal or My Sites used back in SharePoint 2007 and 2010. This means that each OneDrive for Business site is its own site collection, and we do not have any centralized location in which to apply branding or any other customizations.

Each OneDrive for Business site is its own site collection under the personal managed path, and the url is created based on the assigned user profile. In the image, three sites are listed as child sites. The URL of the first child site ends with /bill_contoso_com. The second ends with /vesa_contoso_com. The third ends with /john_contoso_com.

The classic solution for applying the needed configuration to OneDrive for Business sites (including my or personal sites) was based on feature stapling at the farm level. This meant that you deployed a farm solution to your SharePoint farm and used the feature framework to associate a custom feature that was activated each time a My Site was created; the feature was then responsible for applying the needed customizations. This approach does not work in Office 365, since it requires a farm solution to be deployed, and that is simply impossible with Office 365 sites. Therefore, we need to look at alternative ways to apply the needed changes to the sites.

In Office 365 there is no centralized event raised when an OD4B site is created that we could attach our custom code to. This means we need to think about alternative solutions, which is quite common with add-in model approaches. Do not get stuck on old models; think about how to achieve the same end result using new APIs and technologies. From a pure requirement perspective, it does not really matter how we apply the customizations to the sites, as long as they are applied. The business requirement is not to use feature stapling; it is to apply the needed customizations using whatever supported technical mechanism is available.

19 Different options for applying customizations

In practice we have four different mechanisms for applying centralized customizations to OD4B sites in Office 365. You could also consider the manual option a fifth, but with hundreds or thousands of OD4B sites, a manual approach is not realistic. Here are the different options we have.

  1. Office 365 suite level settings (Office 365 themes and other settings)
  2. Hidden app part with user context
  3. Pre-create and apply configuration
  4. Remote timer job based on user profile updates

Each of the options has advantages and disadvantages, and the right one depends on your detailed business requirements. Some of the settings can also be applied at the Office 365 suite level, but often you will be looking for something more specific, so actual customizations are needed. It all comes down to the exact requirements and a business-case analysis of their short- and long-term impact.

19.1 Office 365 suite level settings

Office 365 is much more than just SharePoint, as you know. You can find more and more additional services that are not based on the SharePoint architecture, like Delve, Yammer, and many upcoming services. This means that enterprise branding and configuration is not just about controlling what we have in the SharePoint sites; rather, we should think about the overall end-user experience and how we provide consistent configuration across the different services.

A classic example of these enterprise requirements is branding, and for that Office 365 theming has already been introduced, which can be used to control some level of branding. There are also other upcoming features that will help you control site governance and other settings from a centralized location outside of site collection settings, like the upcoming Compliance Center for Office 365, which is currently listed in the Office 365 roadmap.

The following picture shows the settings currently available for Office 365 theming, which are then applied across all Office 365 services.

Displays the Office 365 site, showing the custom theming tab page, entitled Manage custom themes for your organization, Customize Office 365 to reflect your organization’s brand. Settings are available for Custom logo, URL for a clickable logo, Background image, Accent color, Navigation bar background color, Text and icons color, and App menu icon color.

Since by default the Office 365 theme settings control the OD4B site suite bar, you will most likely use this option together with the other options to ensure that you can provide at least the right branding elements across your OD4B sites. Notice that when you change, for example, the Office 365 theme settings in the Office 365 admin tool, it takes quite a long time for the settings to be applied to OD4B sites, so be patient.

19.2 Hidden app part with user context

This is an approach where you use a centralized landing page as the location for starting the needed customization process. This means you have one centralized location, like the company intranet front page, where users always land when they open their browser. This is a pretty typical setup in midsized and larger enterprises, where the corporate landing page is controlled using group policy settings in Active Directory. This ensures that end users cannot override the default welcome page of the company’s domain-joined browsers.

When the user arrives at the intranet, a hidden app part on the page starts the customization process. It can actually be responsible for the whole OD4B site creation as well, since normally the user would have to visit the OD4B site once before the site creation process is started. The hidden app part is actually hosting a page from a provider-hosted add-in hosted in Azure. This page is then responsible for starting the customization process.

Let’s have a closer look at the logical design of this approach.

Diagram to show relationships. The App part on the SharePoint site uses instantiate to go to Provider Hosted Apps. Provider Hosted Apps uses Add Message to go to Storage Queue. Storage Queue uses instantiate to go to WebJob. WebJob uses Apply modifications to go to the OD4B site.

  1. Place a hidden app part on a centralized site where end users land. Typically this is the corporate intranet front page.
  2. The app part hosts a page from the provider-hosted add-in, where server-side code initiates the customization process by adding the needed metadata to the Azure storage queue. This page only receives the customization request; it does not apply any changes itself, which keeps the processing time reasonable.
  3. The Azure storage queue receives the messages and queues them for processing. This lets us handle the customization process asynchronously, so it does not matter how long the end user stays on the intranet front page. If the customization process were synchronous, we would depend on the end user keeping the browser open on the intranet front page until page execution finished, which would definitely not be an optimal end-user experience.
  4. A WebJob hooked to the storage queue is called when a new item is placed in the queue. The WebJob receives the needed parameters and metadata from the queued message to access the right site collection. The WebJob uses an app-only token and has been granted the needed permissions to manipulate site collections at the tenant level.
  5. The actual customizations are applied one by one to the sites of the people who visit the intranet front page, which starts the process.

This is definitely the most reliable way of ensuring that the right configuration is in the OD4B sites. You can easily add customization versioning logic to the process, which also applies any needed updates to the OD4B sites when an update is needed and the user next visits the intranet front page. This option does, however, require a centralized location where your end users land.
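The versioning logic mentioned above can be as simple as stamping the applied customization version into the site and comparing it on each run. A minimal sketch in JavaScript, with a plain object standing in for the site’s property bag (the key name FTC_CustomizationVersion is made up for illustration):

```javascript
// Sketch of the versioning check the WebJob could run per OD4B site.
// "propertyBag" stands in for the site's property bag; the key is hypothetical.
var CURRENT_VERSION = 2;
var VERSION_KEY = "FTC_CustomizationVersion";

function needsCustomization(propertyBag) {
    var applied = propertyBag[VERSION_KEY] || 0;
    return applied < CURRENT_VERSION;
}

function applyCustomizations(propertyBag) {
    if (!needsCustomization(propertyBag)) {
        return false; // site already up to date, skip the work
    }
    // ...apply theme, JavaScript embedding, and so on here...
    propertyBag[VERSION_KEY] = CURRENT_VERSION;
    return true;
}
```

With this in place, re-running the process against an already customized site is a cheap no-op, and bumping CURRENT_VERSION rolls updates out lazily as users revisit the intranet front page.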

If you are familiar with classic SharePoint development models using farm solutions, this is a pretty similar process to one-time-executing timer jobs.

19.3 Pre-create and apply configuration

This option relies on pre-creating the OD4B sites before users access them. This can be achieved by using a relatively new API that provides a way to create OD4B sites for specific users in a batch process, using either CSOM or REST. The needed code can be initiated from a PowerShell script or by writing actual code that calls the remote APIs.

An administrator uses pre-create and customize to create an OD4B site.

  1. The administrator uses the remote creation APIs to create OD4B sites for users and applies the needed customizations to the OD4B sites as part of the script process.
  2. The actual OD4B sites are created in Office 365 for the specific users and associated with their user profiles.

In some sense this is also a really reliable process, but you have to handle new people and updates “manually”, which could mean more work than the hidden app part approach. It is definitely a valid approach, and especially useful if you are migrating from some other file-sharing solution to OD4B and want to avoid requiring end users to access the OD4B site once before the actual site creation is started.
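The batch-creation API mentioned in this option is exposed through both CSOM and REST (`ProfileLoader.CreatePersonalSiteEnqueueBulk`). The sketch below only composes the REST request rather than sending it, since sending requires an authenticated context; verify the endpoint shape against the current documentation before relying on it.

```javascript
// Sketch: compose the REST request that queues OD4B site creation for a
// batch of users. Sending it requires an authenticated call (access token
// or request digest), which is out of scope here.
function buildPreCreateRequest(mySiteHostUrl, emailAddresses) {
    return {
        url: mySiteHostUrl.replace(/\/$/, "") +
             "/_api/ProfileLoader.GetProfileLoader/CreatePersonalSiteEnqueueBulk",
        method: "POST",
        headers: {
            "accept": "application/json;odata=verbose",
            "content-type": "application/json;odata=verbose"
        },
        body: JSON.stringify({ "emailIDs": emailAddresses })
    };
}
```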

19.4 Remote timer job based on user profile updates

This approach means scanning through user profiles to check for whom an OD4B site has been created and then applying the changes to the sites as needed. This means a scheduled job running outside of SharePoint that periodically checks the status and performs the needed customizations. The scheduled job could run as a WebJob in Azure or as something as simple as a PowerShell script scheduled in your own Windows Task Scheduler. Obviously, the scale of the deployment has a huge impact on the chosen scheduling option.

A remote timer job uses Loop through site collections to customize each site.

  1. A scheduled task is initiated that accesses the user profiles to check which users have an OD4B site provisioned.
  2. The actual sites are customized one by one based on the business requirements.

One of the key downsides of this option is that there can clearly be situations where a user accesses the OD4B site before the customizations have been applied. At the same time, this option is an interesting add-on to the other options, to ensure that end users have not changed any of the required settings on the sites, or to check that the OD4B site content aligns with company policies.
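The scanning loop itself is straightforward; here is a sketch with mocked profile data. In a real job the profiles would come from the user profile service (the PersonalSpace property holds the personal site path) and customizeSite would apply changes through CSOM or REST.

```javascript
// Sketch: pick the profiles that already have an OD4B site provisioned
// and customize each one. "personalSpace" mirrors the user profile
// property that holds the personal site path; customizeSite is a stand-in
// for the real remote customization call.
function customizeProvisionedSites(profiles, customizeSite) {
    var customized = [];
    profiles.forEach(function (profile) {
        if (profile.personalSpace) {        // OD4B site exists for this user
            customizeSite(profile.personalSpace);
            customized.push(profile.personalSpace);
        }
    });
    return customized;
}
```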


19.4.3 Applies to

  • Office 365 Multi Tenant (MT)
  • Office 365 Dedicated (D) - partly
  • SharePoint 2013 on-premises - partly

Patterns for Dedicated and on-premises are identical to add-in model techniques, but there are differences in the technologies that can be used.

19.4.4 Author

Vesa Juvonen (Microsoft) - @vesajuvonen

19.4.5 Version history

Version   Date                Comments
1.0       January 2nd, 2015   Initial release

20 Customize a SharePoint page by using remote provisioning and CSS

Use CSS to customize SharePoint rich text fields and Web Part Zones.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

You can use cascading style sheets (CSS) to customize SharePoint rich text fields and Web Part Zones. To customize rich text fields, you can do this right in the page you’re editing. For Web Part Zones, you can use the Script Editor Web Part to add HTML or scripts, or associate a CSS style sheet.
For a code sample that is associated with this article, see Branding.AlternateCSSAndSiteLogo in Office 365 Developer Patterns and Practices on GitHub.

20.1 Customize rich text fields

<a name="sectionSection0"> </a>

You can customize rich text fields by using CSS right in the page editor:

  1. In your SharePoint page, choose Edit to open the page editor.

  2. From the ribbon, choose Insert > Embed Code.

You can now add or modify CSS elements for a rich text field.

20.2 Customize Web Part Zones

<a name="sectionSection1"> </a>

To customize Web Part Zones by using CSS, you use the Script Editor Web Part. For more information, see How to Use the Script Editor Web Part in SharePoint 2013.

Note If you are using SharePoint Online and the NoScript feature, the Script Editor Web Part is disabled.

The following code example uploads custom CSS to the Asset Library, applies a reference to the CSS URL with a custom action, and then creates a custom action to build a link to the new CSS file.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using Microsoft.SharePoint.Client;
using Microsoft.SharePoint.Client.EventReceivers;

namespace AlternateCSSAppAutohostedWeb.Services
{
    public class AppEventReceiver : IRemoteEventService
    {
        public SPRemoteEventResult ProcessEvent(SPRemoteEventProperties properties)
        {
            SPRemoteEventResult result = new SPRemoteEventResult();

            try
            {
                using (ClientContext clientContext = TokenHelper.CreateAppEventClientContext(properties, false))
                {
                    if (clientContext != null)
                    {
                        Web hostWebObj = clientContext.Web;

                        List assetLibrary = hostWebObj.Lists.GetByTitle("Site Assets");
                        clientContext.Load(assetLibrary, l => l.RootFolder);

                        // First, upload the CSS files to the Asset Library.
                        DirectoryInfo themeDir = new DirectoryInfo(System.Web.Hosting.HostingEnvironment.ApplicationPhysicalPath + "CSS");
                        foreach (var themeFile in themeDir.EnumerateFiles())
                        {
                            FileCreationInformation newFile = new FileCreationInformation();
                            newFile.Content = System.IO.File.ReadAllBytes(themeFile.FullName);
                            newFile.Url = themeFile.Name;
                            newFile.Overwrite = true;

                            Microsoft.SharePoint.Client.File uploadAsset = assetLibrary.RootFolder.Files.Add(newFile);
                            clientContext.Load(uploadAsset);
                        }
                        
                        string actionName = "SampleCSSLink";

                        // Now, apply a reference to the CSS URL via a custom action.
                        
                        // Clean up existing actions that we may have deployed.
                        var existingActions = hostWebObj.UserCustomActions;
                        clientContext.Load(existingActions);

                        // Run uploads and initialize the existing Actions collection.
                        clientContext.ExecuteQuery();

                        // Clean up existing actions. Enumerate a copy, since
                        // DeleteObject modifies the collection being iterated.
                        foreach (var existingAction in existingActions.ToArray())
                        {
                            if (existingAction.Name.Equals(actionName, StringComparison.InvariantCultureIgnoreCase))
                                existingAction.DeleteObject();
                        }
                        clientContext.ExecuteQuery();

                        // Build a custom action to write a link to your new CSS file.
                        UserCustomAction cssAction = hostWebObj.UserCustomActions.Add();
                        cssAction.Location = "ScriptLink";
                        cssAction.Sequence = 100;
                        cssAction.ScriptBlock = @"document.write('<link rel=""stylesheet"" href=""" + assetLibrary.RootFolder.ServerRelativeUrl + @"/cs-style.css"" />');";
                        cssAction.Name = actionName;
                        
                        // Apply.
                        cssAction.Update();
                        clientContext.ExecuteQuery();
                    }
                    result.Status = SPRemoteEventServiceStatus.Continue;
                    return result;
                }
            }
            catch (Exception ex)
            {
                result.Status = SPRemoteEventServiceStatus.CancelWithError;
                result.ErrorMessage = ex.Message;
                return result;
            }
            
        }

        public void ProcessOneWayEvent(SPRemoteEventProperties properties)
        {
            // This method is not used by app events.
        }
    }
}


21 Customize your SharePoint site UI by using JavaScript

You can update your SharePoint site’s UI by using JavaScript.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The Core.EmbedJavaScript sample add-in adds a status bar message to all pages on a SharePoint site, and removes the new subsite link from the Site Contents page by using JavaScript.

Use this solution if you want to apply UI updates to your SharePoint site by using JavaScript (sometimes referred to as the Embed JavaScript technique) instead of creating custom master pages.

21.1 Before you begin

<a name="sectionSection0"> </a>

To get started, download the Core.EmbedJavaScript sample add-in from the Office 365 Developer patterns and practices project on GitHub.

21.2 Using the Core.EmbedJavaScript app

<a name="sectionSection1"> </a>

When you run this code sample, a provider-hosted add-in appears, as shown in Figure 1.

Figure 1. Screen shot of Core.EmbedJavaScript add-in start page

Screenshot of the Start page of the Embed JavaScript sample

Choosing Embed customization customizes the SharePoint site by:

  • Creating a status bar message on all pages in the SharePoint site, as shown in Figure 2.

  • Removing the new subsite link from Site Contents as shown in Figure 3.

Figure 2. Screen shot of status bar added to all pages

Status bar added to all SharePoint site pages

Figure 3. Screen shot of new subsite link removed from the Site Contents page

The new subsite link on Site Contents is removed.

In Figure 1, choosing Embed customization calls btnSubmit_Click in default.aspx. btnSubmit_Click calls AddJsLink, which does the following:

  1. Creates a string representing a script block definition. This script block definition points to a JavaScript file (scenario1.js) which is included on all pages on the SharePoint site.

  2. Uses UserCustomActions to get all user custom actions defined on the SharePoint site. Any existing reference to a JavaScript file called scenario1.js is removed.

  3. Creates a new custom action, and assigns the script block definition created in step 1 to the new custom action.

  4. Adds the new custom action to the website.

All pages on your SharePoint site will now run scenario1.js and display the UI customizations shown in Figure 2 and Figure 3.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

 public void AddJsLink(ClientContext ctx, Web web)
        {
            string scenarioUrl = String.Format("{0}://{1}:{2}/Scripts", this.Request.Url.Scheme, 
                                                this.Request.Url.DnsSafeHost, this.Request.Url.Port);
            string revision = Guid.NewGuid().ToString().Replace("-", "");
            string jsLink = string.Format("{0}/{1}?rev={2}", scenarioUrl, "scenario1.js", revision);

            StringBuilder scripts = new StringBuilder(@"
                var headID = document.getElementsByTagName('head')[0]; 
                var");

            scripts.AppendFormat(@"
                newScript = document.createElement('script');
                newScript.type = 'text/javascript';
                newScript.src = '{0}';
                headID.appendChild(newScript);", jsLink);
            string scriptBlock = scripts.ToString();

            var existingActions = web.UserCustomActions;
            ctx.Load(existingActions);
            ctx.ExecuteQuery();
            var actions = existingActions.ToArray();
            foreach (var action in actions)
            {
                if (action.Description == "scenario1" &&
                    action.Location == "ScriptLink")
                {
                    action.DeleteObject();
                    ctx.ExecuteQuery();
                }
            }

            var newAction = existingActions.Add();
            newAction.Description = "scenario1";
            newAction.Location = "ScriptLink";

            newAction.ScriptBlock = scriptBlock;
            newAction.Update();
            ctx.Load(web, s => s.UserCustomActions);
            ctx.ExecuteQuery();
        }

SharePoint uses Minimal Download Strategy (MDS) to reduce the amount of data the browser downloads when users navigate between pages on a SharePoint site. For more information, see Minimal Download Strategy overview. In scenario1.js, the following code ensures that whether or not your SharePoint site uses Minimal Download Strategy, RemoteManager_Inject always runs.

// Is MDS enabled?
if ("undefined" != typeof g_MinimalDownload && g_MinimalDownload && (window.location.pathname.toLowerCase()).endsWith("/_layouts/15/start.aspx") && "undefined" != typeof asyncDeltaManager) {
    // Register script for MDS if possible
    RegisterModuleInit("scenario1.js", RemoteManager_Inject); //MDS registration
    RemoteManager_Inject(); //non MDS run
} else {
    RemoteManager_Inject();
}

RemoteManager_Inject performs the following tasks on your SharePoint site:

  • Creates a status bar on the host web. RemoteManager_Inject uses SP.SOD.executeOrDelayUntilScriptLoaded to ensure sp.js is loaded first, before calling SetStatusBar to add the status bar to the site. Because JavaScript files load asynchronously, using SP.SOD.executeOrDelayUntilScriptLoaded ensures your JavaScript file (sp.js) is loaded before your code calls a function defined in that JavaScript file.

  • Hides the new subsite link on the Site Contents page.

function RemoteManager_Inject() {

    loadScript(jQuery, function () {
        $(document).ready(function () {
            var message = "<img src='/_Layouts/Images/STS_ListItem_43216.gif' align='absmiddle'> <font color='#AA0000'>JavaScript customization is <i>fun</i>!</font>"

            // Execute status setter only after SP.JS has been loaded
            SP.SOD.executeOrDelayUntilScriptLoaded(function () { SetStatusBar(message); }, 'sp.js');

            // Customize the viewlsts.aspx page
            if (IsOnPage("viewlsts.aspx")) {
                //hide the subsites link on the viewlsts.aspx page
                $("#createnewsite").parent().hide();
            }
        });
    });
}

function SetStatusBar(message) {
    var strStatusID = SP.UI.Status.addStatus("Information : ", message, true);
    SP.UI.Status.setStatusPriColor(strStatusID, "yellow");
}

function IsOnPage(pageName) {
    if (window.location.href.toLowerCase().indexOf(pageName.toLowerCase()) > -1) {
        return true;
    } else {
        return false;
    }
}


22 Data storage options in SharePoint Online

When you develop SharePoint Online add-ins, you have a number of different options for data storage. You can use the sample described in this article to explore the differences between each option, and to learn about the advantages to using remote data storage.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

This article describes the Core.DataStorageModels sample app, which shows you each of the following data storage options and the advantages and disadvantages of each:

  • SharePoint list on the host web

  • SharePoint list on the app web

  • SQL Azure database

  • Azure queue storage

  • Azure table storage

  • External web service

The Core.DataStorageModels sample app is a provider-hosted app written in C# and JavaScript that deploys a number of SharePoint artifacts (lists, app part, web part) to both the host web and the app web. It interacts with SharePoint lists on the app web and host web, and also makes calls to a SQL Azure database, an Azure queue and table storage, and a remote web service that implements OData. This sample uses the Model-View-Controller (MVC) pattern.
The Core.DataStorageModels sample app applies each data storage option to a specific function for which the option is well suited, as described in the following table.

Sample app storage option       Used for
SharePoint list app web         Customer notes
SharePoint list host web        Support cases
Northwind OData service         Customers
Azure table storage             CSR ratings
Azure queue storage             Call queue
SQL Azure Northwind database    Orders, order details, products

The app implements a customer service dashboard and related interfaces that show recent orders, customer representative survey ratings, customer notes, support cases, and a customer representative call queue.
The first two scenarios let you retrieve data by using relatively simple client object model code or REST queries, but they are limited by list query thresholds. The next four scenarios use different types of remote storage.

Figure 1. Data storage models start page prompts you to deploy SharePoint components

Screenshot of app sample UI

22.1 Before you begin

<a name="sectionSection0"> </a>

Before you use this sample, make sure that you have the following:

  • A Microsoft Azure account where you can deploy a SQL Azure database and create an Azure storage account.

  • A SharePoint developer site so that you can deploy the sample from Visual Studio 2013.

Also, you need to deploy the Northwind database to Microsoft Azure.

22.1.1 To deploy the Northwind database

  1. Log on to the Azure Management Portal and choose SQL Databases > Servers.

  2. Choose Create a SQL Database Server.

  3. In the Create Server form, enter values for Login Name, Login Password, and Region, as shown in Figure 2.

    Figure 2. SQL database server settings

    Shows the SQL database server settings

  4. Choose the checkmark button to finish and create the server.

  5. Now that you’ve created the server, choose the server name that you created, as shown in Figure 3.

    Figure 3. Server name on the Servers page

    Shows the list of SQL databases

  6. Choose CONFIGURE, and then choose the arrow in the lower right corner to complete the configuration, and choose SAVE.

  7. Open SQL Server Management Studio on your local development computer and create a new database named NorthWind.

  8. In the Object Explorer, select the Northwind database, and then choose New Query.

  9. In a text editor of your choice, open the northwind.sql SQL script that is provided with the Core.DataStorageModels sample.

  10. Copy the text in the northwind.sql file and paste it into the SQL Query window in the SQL Server Management Studio, and then choose Execute.

  11. In the Object Explorer, open the shortcut menu for (right-click) the Northwind database, select Tasks, and then select Deploy Database to SQL Azure.

  12. On the Introduction screen, choose Next.

  13. Choose Connect … and enter the Server name for the SQL Azure Database Server you just created.

  14. In the Authentication dropdown, select SQL Server Authentication.

  15. Enter the user name and password you used when you created the SQL Azure Database server, then choose Connect.

  16. Choose Next, and then choose Finish, and wait until the database is created. After it is created, choose Close to close the wizard.

  17. Return to the Azure Management Portal ( https://manage.windowsazure.com/) to verify that the Northwind database was created successfully. You should see it listed on the sql databases screen, as shown in Figure 4.

    Figure 4. Listing of SQL Server databases

    Shows a list of all SQL databases, including Northwind

  18. Select the Northwind database, and then select View SQL Database connection strings.

  19. Copy the connection string and paste it into a text file and save it locally. You will need this connection string later. Close the Connection Strings dialog box.

  20. Choose the Set up Windows Azure firewall rules for this IP address link and add your IP address to the firewall rules to allow you to access the database.

  21. Open the Core.DataStorageModels.sln project in Visual Studio 2013.

  22. In the Visual Studio Solution Explorer, locate the Web.config file.

  23. In the Web.config file, locate the add name="NorthWindEntities" element and replace the existing connectionString value with the connection string information that you saved locally in step 19.

      <add name="NorthWindEntities" connectionString="metadata=res://*/Northwind.csdl|res://*/Northwind.ssdl|res://*/Northwind.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=<Your Server Here>.database.windows.net;initial catalog=NorthWind;user id=<Your Username Here>@<Your Server Here>;password=<Your Password Here>;MultipleActiveResultSets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" />
  24. Save the Web.config file.

22.2 SharePoint list on the app web (Notes scenario)

<a name="sectionSection1"> </a>

The Notes list scenario, which uses a SharePoint list on an app web, shows how lists perform in a SharePoint app web. The Notes list is created in the app web with a title and description field. The SharePoint REST API queries the Notes list and returns all the notes based on a customer ID.

Using lists in the app web has one important advantage over other storage solutions: you can use simple SharePoint REST API calls to query data. However, there are some disadvantages:

  • To update list metadata, you must update and redeploy the app.

  • To update the data structure, you must rewrite application logic for storing and updating data.

  • Information stored in the list cannot be shared easily with other add-ins.

  • You cannot search the data by using SharePoint search.

  • The amount of data that you can store in lists and the size of query result sets are limited.

The code that underlies the Notes section of the customer dashboard uses REST queries to retrieve data from a list that is deployed to the app web. This list contains fields for titles, authors, customer IDs, and descriptions. You can use the app’s interface to add and retrieve notes for a specified customer, as shown in Figure 5.

Figure 5. User interface for the Notes app

A screenshot that shows the UI for the Notes data storage model

The View Notes List in App Web link provides an “out of the box” view of the list data.

This app uses the Model-View-Controller (MVC) pattern. You can see the code for the notes scenario in the Views/CustomerDashboard/Notes.cshtml file. It uses simple REST calls to add and retrieve data. The following code retrieves notes from the Notes list for a specified customer.

function getNotesAndShow() {
    var executor = new SP.RequestExecutor(appWebUrl);
    executor.executeAsync(
       {
           url: appWebUrl + "/_api/web/lists/getByTitle('Notes')/items/" +
                "?$select=FTCAM_Description,Modified,Title,Author/ID,Author/Title" +
                "&$expand=Author/ID,Author/Title" +
                "&$filter=(Title eq '" + customerID + "')",
           type: "GET",
           dataType: 'json',
           headers: { "accept": "application/json;odata=verbose" },
           success: function (data) {
               var value = JSON.parse(data.body);
               showNotes(value.d.results);
           },
           error: function (error) { console.log(JSON.stringify(error)) }
       }

    );
}

The following code adds a note for a given customer to the notes list.

function addNoteToList(note, customerID) {
    var executor = new SP.RequestExecutor(appWebUrl);
    var bodyProps = {
        '__metadata': { 'type': 'SP.Data.NotesListItem' },
        'Title': customerID,
        'FTCAM_Description': note
    };
    executor.executeAsync({
        url: appWebUrl + "/_api/SP.AppContextSite(@target)/web/lists/getbytitle('Notes')/items?@target='" + appWebUrl + "'",
        contentType: "application/json;odata=verbose",
        method: "POST",
        headers: {
            "accept": "application/json;odata=verbose",
            "content-type": "application/json;odata=verbose",
            "X-RequestDigest": $("#__REQUESTDIGEST").val()
        },
        body: JSON.stringify(bodyProps),
        success: getNotesAndShow,
        error: addNoteFailed
    });
}

You can add 5000 items to the list to show that list queries that generate a result set of 5000 or more items will hit the list query threshold and fail. You can also add so much data to your list on the app web that you exceed the storage limit for your site collection (which depends on how much storage space you’ve allocated to it). These scenarios show two of the most important limitations of this approach: list query size limits and storage space limits. If your business needs require you to work with large data sets and query result sets, this approach won’t work.
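One common way to keep reads working against a large list is to page the REST query with `$top` and follow the server-provided `__next` links instead of issuing one large query. A sketch assuming the same verbose OData response shape used elsewhere in this sample (fetchJson stands in for an authenticated call such as SP.RequestExecutor.executeAsync):

```javascript
// Sketch: page through a large list 100 items at a time rather than
// requesting everything in a single query. "fetchJson" is a stand-in
// for an authenticated request that returns the parsed verbose payload.
function getAllNotes(appWebUrl, fetchJson) {
    var items = [];
    var url = appWebUrl + "/_api/web/lists/getByTitle('Notes')/items?$top=100";
    while (url) {
        var data = fetchJson(url);           // parsed verbose OData payload
        items = items.concat(data.d.results);
        url = data.d.__next || null;         // follow the paging link, if any
    }
    return items;
}
```

Note that paging helps with reading; it does not lift the underlying threshold for filtered queries over unindexed columns, so large data sets are still usually better served by the remote storage options described later in this article.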

22.2.1 List query threshold

<a name="bk_listquerythreshold"> </a>

To load enough data to exceed the list query threshold limit:

  1. In the left menu, choose Sample Home Page.

  2. In the List Query Thresholds section, choose Add list items to the Notes list in the App Web.

  3. Per the instructions that appear above the button, perform this operation 10 times.

    When the Notes list is updated, a message appears at the top of the page that indicates how many list items (Notes) you added and how many are left to add.

    Note The operation takes about one minute to run each time you choose the button. The end result of running the operation 10 times is shown in Figure 6.
  4. After you’ve added 5,001 items to the list, choose Notes in the left menu. When the page loads, you will see the error message shown in Figure 6, which comes from the SharePoint REST API.

    Figure 6. List query threshold exceeded error message

    A screenshot that shows an error message that states that the operation exceeded the list view threshold.

  5. Choose View Notes List in App Web and page through the list to see that it includes 5,001 rows. Note that although SharePoint list views can accommodate browsing this many entries, the REST API fails because of the list query throttling threshold.
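If you do need to read a large list without tripping the threshold, the usual pattern is to page through it rather than request everything in one query. The following sketch is illustrative only; the helper names are hypothetical and not part of this sample. It builds a REST URL that caps each request with $top, and reads the d.__next link that the REST service includes in a response while more pages remain.

```javascript
// Hypothetical helpers (not from the sample) showing the REST paging pattern.

// Build a REST query URL that asks for at most pageSize items per request,
// keeping each individual query well under the 5,000-item threshold.
function buildPagedNotesUrl(webUrl, pageSize) {
    return webUrl +
        "/_api/web/lists/getbytitle('Notes')/items" +
        "?$select=Title,FTCAM_Description" +
        "&$top=" + pageSize;
}

// A REST response exposes a d.__next link while more pages remain;
// issuing a GET against that link retrieves the next page.
function nextPageUrl(responseData) {
    return responseData.d.__next || null;
}
```

Each page stays under the threshold; you keep issuing GET requests against the returned __next link until it is absent. Note that a $filter over an unindexed column on a large list can still hit the threshold regardless of page size.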

22.2.2 Data storage limit

<a name="bk_listquerythreshold"> </a>

To load enough data to exceed the data storage limit:

  1. In the left menu, choose Sample Home Page.

  2. In the Data Threshold section, choose Fill the App Web Notes list with 1GB of data.

  3. Per the instructions that appear above the Fill the App Web Notes list with 1GB of data button, perform this operation 11 times.

    When the Notes list is updated, a message appears at the top of the page that indicates how many list items (Notes) you added and how many are left to add.

    Note The operation takes about one minute to run each time you choose the button. The end result of running the operation 11 times is shown in Figure 7.
  4. After you perform the operation 11 times, an error message will occur when you choose the button, as shown in Figure 7.

    Figure 7. Data storage threshold exceeded error message

    A screenshot that shows the error message that occurs when the data storage limit is exceeded

  5. After you exceed the data storage limit, choose the back button in the web browser, and then choose the Notes link in the left menu.

  6. Choose View Notes List in App Web.

    When the page loads, an error message appears at the top of the page that indicates that the site is out of storage space.

22.3 SharePoint list on the host web (Support Cases scenario)

<a name="sectionSection2"> </a>

The Support Cases scenario displays data that is stored in a SharePoint list in the host web. This scenario uses two different patterns to access and interact with the data. The first pattern includes the SharePoint Search Service and the Content By Search Web Part with a custom Display Template applied. The second pattern includes an App Part (Client Web Part) that displays an MVC view, which uses the SP.RequestExecutor class to call the SharePoint REST API.

There are several advantages to using this approach:

  • You can query data easily using simple REST queries or client object model code.

  • You can search for data in SharePoint.

  • You can update the list metadata and create new views for a list without updating and redeploying the app. These changes won’t affect the behavior of your app.

  • Lists on the host web are not deleted when you uninstall your app, unless the app uses the AppUninstalled event to remove the list and/or delete the data.

Offsetting these advantages are the following disadvantages:

  • The host web limits both the amount of data you can store in lists and the size of the query results. If your business needs require storing and/or querying large data sets, this is not a recommended approach.

  • For complex queries, lists do not perform as well as databases.

  • For backing up and restoring data, lists do not perform as well as databases.

The data for this scenario is stored in a SharePoint list deployed to the host web. Data is retrieved and displayed in two ways: by an app part that hosts an MVC view, and by a Content By Search web part.

The code in the view uses REST queries to retrieve information from the list, while the content search web part uses the SharePoint search service to retrieve the data. The two approaches demonstrate the significant advantage of this option: you can use both the search service and the REST/CSOM APIs to retrieve information from a list on the host web.
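To make the search-based path concrete, here is a minimal sketch of composing a search REST query for the same data. The helper and the managed property name FTCAMCustomerID are assumptions for illustration only; search queries run against managed properties produced by the crawl, not the list’s internal field names, so your property name will differ.

```javascript
// Hypothetical helper (not from the sample): builds a SharePoint search REST
// URL that finds support-case items for one customer. FTCAMCustomerID stands
// in for whatever managed property the crawl maps the list field to.
function buildSupportCaseSearchUrl(webUrl, customerID) {
    var queryText = "FTCAMCustomerID:" + customerID;
    return webUrl + "/_api/search/query?querytext='" +
        encodeURIComponent(queryText) + "'";
}
```

A search query is not subject to the list view threshold, but it returns results only after the content has been crawled, which is why the web part can lag behind recent writes.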

When you select a customer from the support cases drop-down, you’ll see the support case data for that customer displayed in both the web part and the app part (Figure 8). The web part might not return content right away, because it can take up to 24 hours for the SharePoint search service to index the data. You can also choose the View Support Cases List in Host Web link to see a conventional view of the list data.

Figure 8. User interface for the support case scenario

A screenshot that shows the UI for interacting with the support case scenario

The content search web part deployed by this app uses a custom display template. Figure 9 shows where in the Assets directory of the web project you can find the web part and the associated template.

Figure 9. Contents of the Assets directory of the web project

Screenshot of the Assets directory

The following JavaScript code, which you’ll find in the Views\SupportCaseAppPart\Index.cshtml file, uses the cross-domain library to invoke a REST query on the SharePoint list on the host web.

function execCrossDomainRequest() {
    var executor = new SP.RequestExecutor(appWebUrl);

    executor.executeAsync(
        {
            url: appWebUrl + "/_api/SP.AppContextSite(@@target)" +
                "/web/lists/getbytitle('Support Cases')/items" +
                "?$filter=(FTCAM_CustomerID eq '" + customerID + "')" +
                "&$top=30" +
                "&$select=Id,Title,FTCAM_Status,FTCAM_CSR" +
                "&@@target='" + hostWebUrl + "'",
            method: "GET",
            headers: { "Accept": "application/json; odata=verbose" },
            success: successHandler,
            error: errorHandler
        }
    );
}

You can add 5,000 items to the list to show that list queries that generate a result set of 5,000 or more items will hit the list query threshold and fail. This scenario shows one of the most important limitations of this approach: list query size limits. If your business needs require you to work with large data sets and query result sets, this approach won’t work. For more information, see List query threshold earlier in this article.

22.4 Northwind OData web service (Customer Dashboard scenario)

<a name="sectionSection3"> </a>

The Customer Dashboard scenario uses jQuery AJAX to invoke the Northwind OData service to return customer information. The app stores its data in a web service, then uses OData to retrieve it.

The following are the advantages to using this approach:

  • A given web service can support multiple add-ins.

  • You can update your web service without having to update and redeploy your app.

  • Your SharePoint and web service installations do not affect one another.

  • Hosting services such as Microsoft Azure enable you to scale your web services.

  • You can back up and restore information on your web services separately from your SharePoint site.

  • You don’t lose data when uninstalling your app, unless the app uses the AppUninstalled event to delete the data.

The customer dashboard scenario stores its data in a web service that implements the OData standard to retrieve data. In the customer dashboard interface, you select a customer from a drop-down menu, and customer information displays in the Customer Info pane.

This UI page is a Model-View-Controller view. The display is defined in the Views\CustomerDashboard\Home.cshtml file. The underlying code is in the Scripts\CustomerDashboard.js file. The JavaScript code uses AJAX to query the Northwind web service. Because this is an OData service, the web service query consists of query string arguments attached to a URL that points to a web service endpoint. The service returns customer information in JSON format.

The following code runs when you choose the Customer Dashboard link. It retrieves all the customer names and IDs in order to populate the drop-down menu.

var getCustomerIDsUrl = "https://odatasampleservices.azurewebsites.net/V3/Northwind/Northwind.svc/Customers?$format=json&$select=CustomerID";
$.get(getCustomerIDsUrl).done(getCustomerIDsDone)
    .error(function (jqXHR, textStatus, errorThrown) {
        $('#topErrorMessage').text('Can\'t get customers. An error occurred: ' + jqXHR.statusText);
    });

The following code runs when you select a customer name from the drop-down menu. It uses the OData $filter argument to specify the customer ID and other query string arguments to retrieve information related to this customer.

var url = "https://odatasampleservices.azurewebsites.net/V3/Northwind/Northwind.svc/Customers?$format=json" +  "&amp;$select=CustomerID,CompanyName,ContactName,ContactTitle,Address,City,Country,Phone,Fax" + "&amp;$filter=CustomerID eq '" + customerID + "'";

$.get(url).done(getCustomersDone)
   .error(function (jqXHR, textStatus, errorThrown) {
          alert('Can\'t get customer ' + customerID + '. An error occurred: ' + 
                 jqXHR.statusText);
});

22.5 Azure table storage (Customer Service Survey scenario)

<a name="sectionSection4"> </a>

The Customer Service Survey scenario allows a customer service representative to see their rating based on customer surveys. It uses Azure table storage and the Microsoft.WindowsAzure.Storage.Table.CloudTable API to store and interact with the data.

The following are the advantages to using this approach:

  • Azure storage tables support more than one app.

  • You can update Azure storage tables without having to update and redeploy your app.

  • Your SharePoint installation and Azure storage tables have no effect on each other’s performance.

  • Azure storage tables scale easily.

  • You can back up and restore your Azure storage tables separately from your SharePoint site.

  • You don’t lose data when you uninstall your app, unless the app uses the AppUninstalled event to delete the data.

The app’s interface displays the current user’s survey rating in the center pane. If the Azure storage table is empty, the app adds some information to the table before it displays it.

The following code from the CSRInfoController.cs file defines the Home method, which retrieves the user’s nameId.

[SharePointContextFilter]
public ActionResult Home()
{
    var context = 
        SharePointContextProvider.Current.GetSharePointContext(HttpContext);
    var sharePointService = new SharePointService(context);
    var currentUser = sharePointService.GetCurrentUser();
    ViewBag.UserName = currentUser.Title;

    var surveyRatingsService = new SurveyRatingsService();
    ViewBag.Score = surveyRatingsService.GetUserScore(currentUser.UserId.NameId);

    return View();
}

The following code from the SurveyRatingsService.cs file defines the SurveyRatingsService constructor, which sets up the connection to the Azure storage account.

public SurveyRatingsService(string storageConnectionStringConfigName =
        "StorageConnectionString")
{
    var connectionString = Util.GetConfigSetting(storageConnectionStringConfigName);
    var storageAccount = CloudStorageAccount.Parse(connectionString);

    this.tableClient = storageAccount.CreateCloudTableClient();
    this.surveyRatingsTable = this.tableClient.GetTableReference("SurveyRatings");
    this.surveyRatingsTable.CreateIfNotExists();
}

The following code from the same file defines the GetUserScore method, which retrieves the user’s survey score from the Azure storage table.

public float GetUserScore(string userName)
{
    var query = new TableQuery<Models.Customer>()
        .Select(new List<string> { "Score" })
        .Where(TableQuery.GenerateFilterCondition("Name",
            QueryComparisons.Equal, userName));

    var items = surveyRatingsTable
        .ExecuteQuery(query)
        .ToArray();

    if (items.Length == 0)
        return AddSurveyRatings(userName);

    return (float)items.Average(c => c.Score);
}

If the table doesn’t contain any survey data related to the current user, the AddSurveyRatings method generates four random scores for the user, stores them in the table, and returns their average.

private float AddSurveyRatings(string userName)
{
    float sum = 0;
    int count = 4;
    var random = new Random();

    for (int i = 0; i < count; i++)
    {
        var score = random.Next(80, 100);
        var customer = new Models.Customer(Guid.NewGuid(), userName, score);

        var insertOperation = TableOperation.Insert(customer);
        surveyRatingsTable.Execute(insertOperation);

        sum += score;
    }
    return sum / count;
}

22.6 Azure queue storage (Customer Call Queue scenario)

<a name="sectionSection5"> </a>

The Customer Call Queue scenario lists callers in the support queue and simulates taking calls. The scenario uses Azure storage queues to store data and the Microsoft.WindowsAzure.Storage.Queue.CloudQueue API with Model-View-Controller.

The following are the advantages to using this approach:

  • Azure storage queues support more than one app.

  • You can update Azure storage queues without having to update and redeploy your app.

  • Your SharePoint installation and Azure storage queues have no effect on each other’s performance.

  • Azure storage queues scale easily.

  • You can back up and restore your Azure storage queues separately from your SharePoint site.

  • You don’t lose data when you uninstall your app, unless the app uses the AppUninstalled event to delete the data.

The app’s interface displays a support call queue in the center pane when you choose the Call Queue link. You can simulate receiving calls (adding a call to the queue) by choosing Simulate Calls, and you can simulate taking the oldest call (removing a call from the queue) by choosing the Take Call action associated with a given call.

This page is a Model-View-Controller view that is defined in the Views\CallQueue\Home.cshtml file. The Controllers\CallQueueController.cs file defines the CallQueueController class, which contains methods for retrieving all calls in the queue, adding a call to the queue (simulating a call), and removing a call from the queue (taking a call). Each of these methods calls methods defined in the Services\CallQueueService.cs file, which uses the Azure storage queue API to retrieve the underlying information in the storage queue.

public class CallQueueController : Controller
{
    public CallQueueService CallQueueService { get; private set; }

    public CallQueueController()
    {
        CallQueueService = new CallQueueService();
    }

    // GET: CallQueue
    public ActionResult Home(UInt16 displayCount = 10)
    {
        var calls = CallQueueService.PeekCalls(displayCount);
        ViewBag.DisplayCount = displayCount;
        ViewBag.TotalCallCount = CallQueueService.GetCallCount();
        return View(calls);
    }

    [HttpPost]
    public ActionResult SimulateCalls(string spHostUrl)
    {
        int count = CallQueueService.SimulateCalls();
        TempData["Message"] = string.Format("Successfully simulated {0} calls and added them to the call queue.", count);
        return RedirectToAction("Index", new { SPHostUrl = spHostUrl });
    }

    [HttpPost]
    public ActionResult TakeCall(string spHostUrl)
    {
        CallQueueService.DequeueCall();
        TempData["Message"] = "Call taken successfully and removed from the call queue!";
        return RedirectToAction("Index", new { SPHostUrl = spHostUrl });
    }
}

The CallQueueService.cs file defines the CallQueueService class, which establishes the connection to the Azure storage queue. That class also contains the methods for adding, removing (dequeuing), and retrieving the calls from the queue.

public class CallQueueService
{
    private CloudQueueClient queueClient;

    private CloudQueue queue;

    public CallQueueService(string storageConnectionStringConfigName = "StorageConnectionString")
    {
        var connectionString = CloudConfigurationManager.GetSetting(storageConnectionStringConfigName);
        var storageAccount = CloudStorageAccount.Parse(connectionString);

        this.queueClient = storageAccount.CreateCloudQueueClient();
        this.queue = queueClient.GetQueueReference("calls");
        this.queue.CreateIfNotExists();
    }

    public int? GetCallCount()
    {
        queue.FetchAttributes();
        return queue.ApproximateMessageCount;
    }

    public IEnumerable<Call> PeekCalls(UInt16 count)
    {
        var messages = queue.PeekMessages(count);

        var serializer = new JavaScriptSerializer();
        foreach (var message in messages)
        {
            Call call = null;
            try
            {
                call = serializer.Deserialize<Call>(message.AsString);
            }
            catch { }

            if (call != null) yield return call;
        }
    }

    public void AddCall(Call call)
    {
        var serializer = new JavaScriptSerializer();
        var content = serializer.Serialize(call);
        var message = new CloudQueueMessage(content);
        queue.AddMessage(message);
    }

    public void DequeueCall()
    {
        var message = queue.GetMessage();
        queue.DeleteMessage(message);
    }

    public int SimulateCalls()
    {
        Random random = new Random();
        int count = random.Next(1, 6);
        for (int i = 0; i < count; i++)
        {
            int phoneNumber = random.Next();
            var call = new Call
            {
                ReceivedDate = DateTime.Now,
                PhoneNumber = phoneNumber.ToString("+1-000-000-0000")
            };
            AddCall(call);
        }

        return count;
    }
}

22.7 SQL Azure database (Recent Orders scenario)

<a name="sectionSection6"> </a>

The Recent Orders scenario uses a direct call to the Northwind SQL Azure database to return all the orders for a given customer.

The following are the advantages to using this approach:

  • A database can support more than one app.

  • You can update your database schema without having to update and redeploy your app, as long as the schema changes don’t affect the queries in your app.

  • A relational database can support many-to-many relationships and thus support more complex business scenarios.

  • You can use database design tools to optimize the design of your database.

  • Relational databases provide better performance than the other options when you need to execute complex operations in your queries, such as calculations and joins.

  • A SQL Azure database allows you to import and export data easily, so it’s easier to manage and move your data.

  • You don’t lose any data when you uninstall your app, unless the app uses the AppUninstalled event to delete the data.

The recent orders interface works much like the customer dashboard interface. You choose the Recent Orders link in the left column, and then choose a customer from the drop-down menu at the top of the center pane. A list of orders from that customer appears in the center pane.

This page is a Model-View-Controller view defined in the Views\CustomerDashboard\Orders.cshtml file. Code in the Controllers\CustomerDashboardController.cs file uses the Entity Framework to query the Orders table in your SQL Azure database. The customer ID is passed by using a query string parameter in the URL that is passed when the user selects a customer from the drop-down menu. The query creates a join on the Customer, Employee, and Shipper tables. The query result is then passed to the Model-View-Controller view that displays the results.

The following code from the CustomerDashboardController.cs file performs the database query and returns the data to the view.

public ActionResult Orders(string customerId)
{            
    Order[] orders;
    using (var db = new NorthWindEntities())
    {
            orders = db.Orders
                  .Include(o => o.Customer)
                  .Include(o => o.Employee)
                  .Include(o => o.Shipper)
                  .Where(c => c.CustomerID == customerId)
                  .ToArray();
    }

    ViewBag.SharePointContext = 
        SharePointContextProvider.Current.GetSharePointContext(HttpContext);

    return View(orders);
}

22.8 Additional resources

<a name="bk_addresources"> </a>

23 Document library templates sample add-in for SharePoint

As part of your Enterprise Content Management (ECM) strategy, you can implement a custom document library template, and customize site columns, site content types, taxonomy fields, version settings, and the default document content type.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The ECM.DocumentLibraries sample shows you how to use a provider-hosted add-in to create a list or document library, assign a content type to it, and remove the default content type. Use this solution if you want to:

  • Create a list or document library and apply a default content type.

  • Assert greater control over the addition, maintenance, or implementation of localized versions of your custom fields.

  • Remove the default content type on a list or library.

  • Apply library configuration settings when you create a list or library.

23.1 Before you begin

<a name="sectionSection0"> </a>

To get started, download the ECM.DocumentLibraries sample add-in from the Office 365 Developer patterns and practices project on GitHub.

Users accessing the ECM.DocumentLibraries add-in must have permissions to manage lists. The DoesUserHavePermission method in Default.aspx.cs checks the user’s permissions to ensure they can manage lists. If the user does not have permissions to manage lists, the add-in presents an error message to the user.

private bool DoesUserHavePermission()
{
    var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);
    using (var ctx = spContext.CreateUserClientContextForSPHost())
    {
        BasePermissions perms = new BasePermissions();
        perms.Set(PermissionKind.ManageLists);
        ClientResult<bool> _permResult = ctx.Web.DoesUserHavePermissions(perms);
        ctx.ExecuteQuery();
        return _permResult.Value;
    }
}

23.2 Using the ECM.DocumentLibraries sample add-in

<a name="sectionSection1"> </a>

When you start this add-in, the start page displays as shown in Figure 1. The ECM.DocumentLibraries start page looks like the page for adding a new document library (Site Contents > add an app > Document Library > Advanced Options), with one difference: the Document Template drop-down list includes two custom document library templates, IT Document and Contoso Document. When the user chooses Create, the selected custom content type is assigned to the new document library.

Figure 1. Start page of the ECM.DocumentLibraries add-in

Screenshot that shows the ECM.DocumentLibraries add-in start page, with a Document Template drop-down box that lists IT Document as a choice.

When users choose Create, the CreateLibrary_Click method in Default.aspx.cs checks the selected default template and makes calls to either the CreateITDocumentLibrary or CreateContosoDocumentLibrary method in ContentTypeManager.cs, as shown in the following code.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

protected void CreateLibrary_Click(object sender, EventArgs e)
{
    try
    {
        var _spContext = SharePointContextProvider.Current.GetSharePointContext(Context);
        var _templateSelectedItem = this.DocumentTemplateType.Value;
        var _libraryToCreate = this.GetLibraryToCreate();
        using (var _ctx = _spContext.CreateUserClientContextForSPHost())
        {
            _ctx.ApplicationName = "AMS ECM.DocumentLibraries";
            ContentTypeManager _manager = new ContentTypeManager();
            switch (_templateSelectedItem)
            {
                case "IT Document":
                    _manager.CreateITDocumentLibrary(_ctx, _libraryToCreate);
                    break;
                case "Contoso Document":
                    _manager.CreateContosoDocumentLibrary(_ctx, _libraryToCreate);
                    break;
            }
        }

        Response.Redirect(this.Url.Value);
    }
    catch (Exception)
    {
        throw;
    }
}

The CreateContosoDocumentLibrary method then performs the following tasks, as shown in the next code example:

  1. Creates custom fields in the Managed Metadata Service.

  2. Creates a content type.

  3. Associates the custom fields with the content types.

  4. Creates the document library with the content type.

public void CreateContosoDocumentLibrary(ClientContext ctx, Library library)
{
    // Check the fields.
    if (!ctx.Web.FieldExistsById(FLD_CLASSIFICATION_ID))
    {
        ctx.Web.CreateTaxonomyField(FLD_CLASSIFICATION_ID,
                                    FLD_CLASSIFICATION_INTERNAL_NAME,
                                    FLD_CLASSIFICATION_DISPLAY_NAME,
                                    FIELDS_GROUP_NAME,
                                    TAXONOMY_GROUP,
                                    TAXONOMY_TERMSET_CLASSIFICATION_NAME);
    }

    // Check the content type.
    if (!ctx.Web.ContentTypeExistsById(CONTOSODOCUMENT_CT_ID))
    {
        ctx.Web.CreateContentType(CONTOSODOCUMENT_CT_NAME,
                                  CT_DESC, CONTOSODOCUMENT_CT_ID,
                                  CT_GROUP);
    }

    // Associate fields to content types.
    if (!ctx.Web.FieldExistsByNameInContentType(CONTOSODOCUMENT_CT_NAME, FLD_CLASSIFICATION_INTERNAL_NAME))
    {
        ctx.Web.AddFieldToContentTypeById(CONTOSODOCUMENT_CT_ID,
                                          FLD_CLASSIFICATION_ID.ToString(),
                                          false);
    }

    CreateLibrary(ctx, library, CONTOSODOCUMENT_CT_ID);
}

CreateContosoDocumentLibrary calls the CreateTaxonomyField method, which is part of OfficeDevPnP.Core. CreateTaxonomyField creates a field in the Managed Metadata Service from the provider-hosted add-in.

public static Field CreateTaxonomyField(this Web web, Guid id, string internalName, string displayName, string group, TermSet termSet, bool multiValue = false)
{
    internalName.ValidateNotNullOrEmpty("internalName");
    displayName.ValidateNotNullOrEmpty("displayName");
    termSet.ValidateNotNullOrEmpty("termSet");

    try
    {
        var _field = web.CreateField(id, internalName, multiValue ? "TaxonomyFieldTypeMulti" : "TaxonomyFieldType", true, displayName, group, "ShowField=\"Term1033\"");

        WireUpTaxonomyField(web, _field, termSet, multiValue);
        _field.Update();

        web.Context.ExecuteQuery();

        return _field;
    }
    catch (Exception)
    {
        // If there is an exception, the hidden field might be present.
        FieldCollection _fields = web.Fields;
        web.Context.Load(_fields, fc => fc.Include(f => f.Id, f => f.InternalName));
        web.Context.ExecuteQuery();
        var _hiddenField = id.ToString().Replace("-", "");

        var _field = _fields.FirstOrDefault(f => f.InternalName == _hiddenField);
        if (_field != null)
        {
            _field.DeleteObject();
            web.Context.ExecuteQuery();
        }
        throw;
    }
}

CreateContosoDocumentLibrary calls the CreateContentType method, which is part of OfficeDevPnP.Core. CreateContentType creates a new content type.

public static ContentType CreateContentType(this Web web, string name, string description, string id, string group, ContentType parentContentType = null)
{
    LoggingUtility.Internal.TraceInformation((int)EventId.CreateContentType, CoreResources.FieldAndContentTypeExtensions_CreateContentType01, name, id);

    // Load the current collection of content types.
    ContentTypeCollection contentTypes = web.ContentTypes;
    web.Context.Load(contentTypes);
    web.Context.ExecuteQuery();
    ContentTypeCreationInformation newCt = new ContentTypeCreationInformation();

    // Set the properties for the content type.
    newCt.Name = name;
    newCt.Id = id;
    newCt.Description = description;
    newCt.Group = group;
    newCt.ParentContentType = parentContentType;
    ContentType myContentType = contentTypes.Add(newCt);
    web.Context.ExecuteQuery();

    // Return the content type object.
    return myContentType;
}

CreateContosoDocumentLibrary calls the AddFieldToContentTypeById method, which is part of OfficeDevPnP.Core. AddFieldToContentTypeById associates a field with a content type.

public static void AddFieldToContentTypeById(this Web web, string contentTypeID, string fieldID, bool required = false, bool hidden = false)
{
    // Get the content type.
    ContentType ct = web.GetContentTypeById(contentTypeID);
    web.Context.Load(ct);
    web.Context.Load(ct.FieldLinks);
    web.Context.ExecuteQuery();

    // Get the field.
    Field fld = web.Fields.GetById(new Guid(fieldID));

    // Add the field association to the content type.
    AddFieldToContentType(web, ct, fld, required, hidden);
}

CreateContosoDocumentLibrary calls the CreateLibrary method in ContentTypeManager.cs to create the document library. The CreateLibrary method assigns library settings such as the document library’s description, document versioning, and associated content types.

private void CreateLibrary(ClientContext ctx, Library library, string associateContentTypeID)
{
    if (!ctx.Web.ListExists(library.Title))
    {
        ctx.Web.AddList(ListTemplateType.DocumentLibrary, library.Title, false);
        List _list = ctx.Web.GetListByTitle(library.Title);
        if (!string.IsNullOrEmpty(library.Description))
        {
            _list.Description = library.Description;
        }

        if (library.VerisioningEnabled)
        {
            _list.EnableVersioning = true;
        }

        _list.ContentTypesEnabled = true;
        _list.Update();
        ctx.Web.AddContentTypeToListById(library.Title, associateContentTypeID, true);
        // Remove the default Document content type.
        _list.RemoveContentTypeByName(ContentTypeManager.DEFAULT_DOCUMENT_CT_NAME);
        ctx.Web.Context.ExecuteQuery();
    }
    else
    {
        throw new Exception("A list, survey, discussion board, or document library with the specified title already exists in this Web site. Please choose another title.");
    }
}

CreateLibrary calls RemoveContentTypeByName in ListExtensions.cs, which is part of OfficeDevPnP.Core. RemoveContentTypeByName removes the default content type on the document library.

        public static void RemoveContentTypeByName(this List list, string contentTypeName)
        {
            if (string.IsNullOrEmpty(contentTypeName))
            {
                throw (contentTypeName == null)
                  ? new ArgumentNullException("contentTypeName")
                  : new ArgumentException(CoreResources.Exception_Message_EmptyString_Arg, "contentTypeName");
            }

            ContentTypeCollection _cts = list.ContentTypes;
            list.Context.Load(_cts);

            IEnumerable<ContentType> _results = list.Context.LoadQuery<ContentType>(_cts.Where(item => item.Name == contentTypeName));
            list.Context.ExecuteQuery();

            ContentType _ct = _results.FirstOrDefault();
            if (_ct != null)
            {
                _ct.DeleteObject();
                list.Update();
                list.Context.ExecuteQuery();
            }
        }

After you create the document library, go to the Library settings on your document library to review the name, description, document versioning setting, content type, and custom fields the add-in assigned to your document library.

Figure 2. Library settings applied by the add-in

Screenshot of a Document Library Setting page, with Name, Web Address, and Description fields highlighted.

23.3 Additional resources

<a name="bk_addresources"> </a>

24 Embedding JavaScript into SharePoint

You can use namespaces to avoid conflicts between your JavaScript customizations and standard SharePoint JavaScript or JavaScript customizations deployed by other developers.

The OfficeDev/PnP samples and solutions often include JavaScript code. In order to make the techniques easy to understand, these samples are usually simple and do not use namespaces when embedding JavaScript code into SharePoint. It is important to ensure that you follow the simple steps outlined in this article when you incorporate PnP samples into your solutions.

24.1 Why using namespaces is important

<a name="sectionSection0"> </a>

JavaScript is a loosely typed language. If you define a variable or function, and a variable or function with the same name already exists in the current context, the new value or implementation will replace the existing one.
As a result, when you embed JavaScript code into SharePoint, it is easy to override standard SharePoint JavaScript code or customizations deployed by other developers.
This can create conflicts that might be hard to identify and debug.

To avoid this, we recommend that you use custom namespaces for your JavaScript code.

24.2 How to use namespaces

<a name="sectionSection1"> </a>

The following example shows a simple pattern used to organize JavaScript code in namespaces and classes.

var MySolution = MySolution || {};

MySolution.MyClass1 = (function () {
    // private members
    var privateVar1 = 1;
    var privateVar2 = 2;
    
    function privateFunction1(){
      return "";
    }
    
    return {
        // public interface
        myFunction1: function() {
          return privateVar1;
        },
        myFunction2: function(){
          return privateVar2;
        }
    };
})();

Functions defined in the public interface can be invoked as follows:

MySolution.MyClass1.myFunction1();

MySolution.MyClass1.myFunction2();

Because all your code uses the custom MySolution namespace, you can avoid any naming conflicts.

24.3 Namespaces and Minimal Download Strategy (MDS)

When the Minimal Download Strategy (MDS) feature is enabled, global namespaces and variables are cleared on MDS navigation.
To retain your namespace, declare it as follows:

    Type.registerNamespace('MySolution');

The Type namespace is specific to SharePoint; for a generic JavaScript library, use the following instead:

if (window.hasOwnProperty('Type')) {
    Type.registerNamespace('MySolution');
} else {
    window.MySolution = window.MySolution || {};
}

24.3.0.1 Namespaces, MDS and CSR (Client Side Rendering)

The RegisterModuleInit function declares a proper Type namespace.
Files attached with JSLink are not re-executed on MDS navigation; use the AsyncDeltaManager functions to handle that.

24.3.1 Resources:

25 Enterprise Content Management solutions for SharePoint 2013 and SharePoint Online

The Enterprise Content Management (ECM) solution pack includes code samples and documentation that you can use to transition your SharePoint Online and SharePoint 2013 ECM solutions from full-trust code to the add-in model.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The samples in this solution pack show you how to use provider-hosted add-ins to perform common ECM tasks such as setting site policies, uploading files, or synchronizing term groups. The articles in this section will help you get started with and walk you through the primary scenarios that each sample illustrates.

25.1 In this section

Each article below covers one sample and what it shows you how to do:

  • Document library templates (ECM.DocumentLibraries) – Implement a custom document library template when creating a document library.
  • Autotagging (ECM.Autotagging) – Automatically tag documents with metadata when documents are created or uploaded to SharePoint.
  • Information management (Core.InformationManagement) – Get or set site policies to manage the site lifecycle (closure and deletion of sites after a period of time).
  • Records management extensions (ECM.RecordsManagement) – Enable and change in-place records management settings on your sites and lists.
  • Taxonomy operations (Core.MMS) – Create and read taxonomy data.
  • Bulk upload documents (Core.BulkDocumentUploader) – Bulk upload documents to document libraries (including OneDrive for Business).
  • Upload large files (Core.LargeFileUpload) – Use different methods to upload large files to a document library.
  • Synchronize term groups (Core.MMSSync) – Synchronize term groups across multiple term stores.
  • Supporting % and # in files and folders with the ResourcePath API – Developer guidance on updated support for % and # in file and folder names.

25.2 Additional resources

<a name="bk_addresources"> </a>

26 Authorization considerations for tenants hosted in the Germany, China or US Government environments

When your Office 365 tenant is hosted in a specific environment such as the Germany, China, or US Government environments, you’ll need to take this into account when you’re developing against your tenant.

Applies to: Office 365 hosted in the Germany, China or US Government environments

26.1 Introduction

<a name="introduction"> </a>

Microsoft has specific Office 365 deployments in Germany, China, and for the US Government to fulfill the specific regulations for those areas. The following links provide more context:

If you are a developer targeting applications for SharePoint Online hosted in these environments, you’ll need to take into account that these environments have their own dedicated Azure AD authentication endpoints, which you as a developer need to use. The following sections explain how to use these dedicated endpoints for the typical SharePoint Online customization options.

26.2 Using Azure AD to authorize

<a name="usingazureadtoauthorize"> </a>

26.2.1 Azure AD endpoints

<a name="adendpoints"> </a>

When your Azure AD application needs to authorize, it must use the correct endpoint. The following table describes the endpoint to use, depending on where your Azure AD application has been defined:

Environment Endpoint
Production https://login.windows.net
Germany https://login.microsoftonline.de
China https://login.chinacloudapi.cn
US Government https://login-us.microsoftonline.com

26.2.2 Using PnP to authorize using Azure AD

<a name="adpnp"> </a>

The PnP AuthenticationManager offers an easy way to obtain a SharePoint ClientContext object when you’re using an Azure AD application. The impacted methods have been extended with an optional AzureEnvironment enum:

/// <summary>
/// Enum to identify the supported Office 365 hosting environments
/// </summary>
public enum AzureEnvironment
{
    Production=0,
    PPE=1,
    China=2,
    Germany=3,
    USGovernment=4
}

The following snippet shows app-only authorization; notice the last parameter in the GetAzureADAppOnlyAuthenticatedContext method:

string siteUrl = "https://contoso.sharepoint.de/sites/test";
string aadAppId = "079d8797-cebc-4cda-a3e0-xxxx"; 
string pfxPassword = "my password";
ClientContext cc = new AuthenticationManager().GetAzureADAppOnlyAuthenticatedContext(siteUrl, 
            aadAppId, "contoso.onmicrosoft.de", @"C:\contoso.pfx", pfxPassword, AzureEnvironment.Germany);

The next snippet shows an interactive user login using the GetAzureADNativeApplicationAuthenticatedContext method:

string siteUrl = "https://contoso.sharepoint.de/sites/test";
string aadAppId = "ff76a9f4-430b-4ee4-8602-xxxx"; 
ClientContext cc = new AuthenticationManager().GetAzureADNativeApplicationAuthenticatedContext(siteUrl, 
            aadAppId, "https://contoso.com/test", environment: AzureEnvironment.Germany);

26.3 Using Azure ACS to authorize your SharePoint add-in

<a name="usingazureacs"> </a>

When you create SharePoint add-ins, they’ll typically use low-trust authorization, which depends on Azure ACS, as described in Creating SharePoint Add-ins that use low-trust authorization.

26.3.1 Azure ACS endpoints

<a name="endpointsacs"> </a>

Environment Endpoint prefix Endpoint
Production accounts accesscontrol.windows.net
Germany login microsoftonline.de
China accounts accesscontrol.chinacloudapi.cn
US Government accounts accesscontrol.windows.net

Using this model, the ACS endpoint URL is formatted as https:// + endpoint prefix + . + endpoint. So the URL for production is https://accounts.accesscontrol.windows.net, and the one for Germany is https://login.microsoftonline.de.
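This composition rule can be expressed as a small helper; the function name is illustrative, and note that the prefix and host are joined with a period, as the production and Germany examples show:

```javascript
// Sketch: compose the ACS endpoint URL from the table above.
function acsEndpointUrl(endpointPrefix, acsHostUrl) {
  return "https://" + endpointPrefix + "." + acsHostUrl;
}

acsEndpointUrl("accounts", "accesscontrol.windows.net"); // "https://accounts.accesscontrol.windows.net"
acsEndpointUrl("login", "microsoftonline.de");           // "https://login.microsoftonline.de"
```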

26.3.2 Updating tokenhelper.cs in your applications

<a name="tokenhelperacs"> </a>

When you perform SharePoint add-in authorization using Azure ACS, you use tokenhelper.cs (or tokenhelper.vb). The default tokenhelper class has hardcoded references to the Azure ACS endpoints and to the methods that acquire the ACS endpoint, as shown below:

...

private static string GlobalEndPointPrefix = "accounts";
private static string AcsHostUrl = "accesscontrol.windows.net";

...

26.3.2.1 Tokenhelper.cs updates for Germany

Update the static variables GlobalEndPointPrefix and AcsHostUrl to the Germany Azure ACS values.

...

private static string GlobalEndPointPrefix = "login";
private static string AcsHostUrl = "microsoftonline.de";

...

26.3.2.2 Tokenhelper.cs updates for China

Update the static variables GlobalEndPointPrefix and AcsHostUrl to the China Azure ACS values:

...

private static string GlobalEndPointPrefix = "accounts";
private static string AcsHostUrl = "accesscontrol.chinacloudapi.cn";

...

26.3.3 Using PnP to authorize your add-in using Azure ACS

<a name="pnpacs"> </a>

The PnP AuthenticationManager offers an easy way to obtain a SharePoint ClientContext object when you’re using Azure ACS to authorize. The impacted methods have been extended with an optional AzureEnvironment enum:

/// <summary>
/// Enum to identify the supported Office 365 hosting environments
/// </summary>
public enum AzureEnvironment
{
    Production=0,
    PPE=1,
    China=2,
    Germany=3,
    USGovernment=4
}

The following snippet shows app-only authorization; notice the last parameter in the GetAppOnlyAuthenticatedContext method:

string siteUrl = "https://contoso.sharepoint.de/sites/test";
string acsAppId = "955c10f2-7072-47f8-8bc1-xxxxx"; 
string acsAppSecret = "jgTolmGXU9DW8hUKgletoxxxxx"; 
ClientContext cc = new AuthenticationManager().GetAppOnlyAuthenticatedContext(siteUrl, acsAppId, 
                acsAppSecret, AzureEnvironment.Germany);

26.3.4 Additional resources

<a name="bk_addresources"> </a>

26.3.5 Summary

In this post I’ll talk about how you can build an Azure WebJob to act as a scheduled job for your Office 365 (or on-prem, should you like) SharePoint installation. With Office 365, if you’re running and utilizing the SharePoint Online service, you’ll need to re-think the way you run the things that used to be timer jobs in your traditional Farm-solutions. Follow along while we walk through the basic concepts of getting started with building custom jobs for Office 365 sites.

27 Introduction to Azure WebJob as a Timer Job for your Office 365 sites

In traditional SharePoint development we have timer jobs, which perform scheduled tasks in your SharePoint farms. A commonly used technique is to develop custom timer jobs in order to continuously or iteratively perform certain tasks in your environment.

With Office 365 and SharePoint Online, you don’t have the luxury to deploy your farm solutions, which is where your traditional timer jobs normally live. Instead, we have to find another way to schedule our tasks – this brings us to the concept of an Azure WebJob.

27.1 Steps for building the WebJob using Visual Studio 2015 (Preview)

In order to build a new WebJob from scratch, all we need to do is create a new console application and make sure we add the required assemblies to the project. In this sample I’ll use Visual Studio 2015 (preview), which as its name implies is currently in a beta release.

27.1.1 Step 1: Create your console application

Start by creating a new project and make sure you’ve selected the “Console Application” template. Also, and this is important, make sure you’ve chosen .NET Framework 4.5!

The New Project dialog box, set to create a console application using the dot net Framework 4.5

27.1.2 Step 2: Add the SharePoint-specific assemblies from NuGet

If you’re using Visual Studio 2015 as I’m doing, the NuGet package manager dialog will look slightly different from earlier versions of Visual Studio, but the concept’s the same.

  • Go to “Tools” -> “NuGet Package Manager” -> “Manage NuGet Packages for Solution…”
  • Search for “App for SharePoint”.
  • Install the package called “AppForSharePointWebToolkit”, which will
    install the required helper classes for working with the SharePoint
    Client Object Model.

The NuGet Package Manager dialog showing the search term, App for SharePoint. App For SharePoint Web Toolkit is highlighted and the Install button is ready to be clicked.
Verify that the NuGet package worked by making sure these two new classes are in your console application project:
The Solution Explorer shows the newly added classes, Share Point Context and Token Helper.

27.1.3 Step 3: Add the required code to execute the job on your Office 365 site

At this point we’ve created our Console Application and we’ve added the required assemblies that make it easy for us to communicate with SharePoint. The next step is to make use of these helper classes in order to execute commands against our SharePoint environment through our Console Application. Tag along.

Note: In the finished sample I’ll be using an account+password approach (like a service account). We’ll discuss authentication options further down in the article and check out links to other alternatives.

27.1.3.1 Wire up the calls to the SharePoint Online site collection

The following code demonstrates how to wire up the call to your site quite easily now that we’ve added the helper classes from our NuGet package.

 static void Main(string[] args)
  {
      using (ClientContext context = new ClientContext("https://redacted.sharepoint.com"))
      {
          // Use default authentication mode
          context.AuthenticationMode = ClientAuthenticationMode.Default;
          // Specify the credentials for the account that will execute the request
          context.Credentials = new SharePointOnlineCredentials(GetSPOAccountName(), GetSPOSecureStringPassword());

          // TODO: Add your logic here!
      }
  }
 
 
  private static SecureString GetSPOSecureStringPassword()
  {
      try
      {
          Console.WriteLine(" --> Entered GetSPOSecureStringPassword()");
          var secureString = new SecureString();
          foreach (char c in ConfigurationManager.AppSettings["SPOPassword"])
          {
              secureString.AppendChar(c);
          }
          Console.WriteLine(" --> Constructed the secure password");
 
          return secureString;
      }
      catch
      {
          throw;
      }
  }
 
  private static string GetSPOAccountName()
  {
      try
      {
          Console.WriteLine(" --> Entered GetSPOAccountName()");
          return ConfigurationManager.AppSettings["SPOAccount"];
      }
      catch
      {
          throw;
      }
   }

You can see in my sample application that I’ve added two helper methods for fetching the Account Name and Account Password from the app.config file. These are explained in the authentication-section further down in this article.

As for the main method, that’s all we need to wire things up to our portal. Before we dig deeper into how we can manipulate SharePoint from our code, let’s discuss options for authentication.

27.2 Authentication considerations

We’ll check out two options for authentication and see how they differ. There may be other options for authentication down the road, but here are two commonly used approaches.

27.2.1 Option 1: Use a Service Account (Username + Password)

This approach is pretty straightforward and enables you to simply enter an account and password for your Office 365 tenant and then use, for example, CSOM to execute code on your sites. This is what you see in my sample code above as well.

27.2.1.1 Create a new Service Account in Office 365

In order for this to work, a specific account should be created that acts as a service account – either for this specific application, or a generic service application account that all your jobs and services can use.

For the sake of this demo, I’ve created a new account called “SP WebJob”:
The dashboard shows the newly created SP WebJob account.

Depending on what permissions the job should have, you will have to edit the permissions of the account when you set it up.

27.2.1.2 Store credentials in your app.config

Within your project’s app.config file you can specify the credentials so they’re easily fetchable from the code executable. This is what my app.config looks like:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
 <startup> 
   <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
 </startup>
 <appSettings>
   <add key="SPOAccount" value="spwebjob@redacted.onmicrosoft.com"/>
   <add key="SPOPassword" value="redacted"/>
 </appSettings>
</configuration>

You can see the two settings in the App.config:

  • SPOAccount
  • SPOPassword

If you review the first code snippet, I’m fetching these settings from the app.config file. Just keep in mind that this means storing the account name and password in clear text in your app.config. You need to make a decision in your own projects for how and where to store and protect your passwords, should you choose this approach.

27.2.1.3 The job runs under the specified account

Once the application runs, you will see that it runs using the account specified in the SharePointOnlineCredentials() constructor:

The automatic translation log shows four text translations attributed to SP WebJob.

In my sample above I’m showing a WebJob that is executing actions on a custom list in one of my sites hosted in my SharePoint Online site collection.

Because of this, we get pretty good traceability of changes in the portal performed by our service account. This is why it’s important to name the account wisely – everyone will know that the modifications were done automatically by our service simply by looking at the modified/created metadata.

27.2.2 Option 2: Use OAuth and include authentication tokens in your requests to avoid specifying account/password

This has been explained in great detail by my friend Kirk Evans at Microsoft.

In his post called “Building a SharePoint Add-in as a Timer Job” he explains how you can utilize and pass along the access tokens in order to avoid username/password setups like I explained above, in case you don’t want to store the passwords and credentials in your application.

27.3 Extending the code with some CSOM magic

At this point we have a working Console Application that can authenticate and execute requests to your Office 365 sites. Nothing fancy has been done in the code yet, so here’s a sample snippet for pulling some information out of a list called “Automatic Translations” that I have created. The code checks whether there are any items in the list that haven’t been translated yet, and if so, calls a translation service to translate the text to the desired output language.

static void Main(string[] args)
{
   try
   {
      Console.WriteLine("Initiating Main()");

      using (ClientContext context = new ClientContext("https://redacted.sharepoint.com"))
      {
         Console.WriteLine("New ClientContext('https://redacted.sharepoint.com') opened. ");

         context.AuthenticationMode = ClientAuthenticationMode.Default;
         context.Credentials = new SharePointOnlineCredentials(GetSPOAccountName(), GetSPOSecureStringPassword());

         Console.WriteLine("Authentication Mode and Credentials configured");

         List translationlist = context.Web.Lists.GetByTitle("Automatic Translations");
         context.Load(translationlist);
         context.ExecuteQuery();

         Console.WriteLine("TranslationList fetched, loaded and ExecuteQuery'ed");

         if (translationlist != null && translationlist.ItemCount > 0)
         {
             Console.WriteLine("The list exist, let's do some magic");

             CamlQuery camlQuery = new CamlQuery();
             camlQuery.ViewXml =
             @"<View>  
             <Query> 
                 <Where><Eq><FieldRef Name='IsTranslated' /><Value Type='Boolean'>0</Value></Eq></Where> 
             </Query> 
         </View>";

             ListItemCollection listItems = translationlist.GetItems(camlQuery);
             context.Load(listItems);
             context.ExecuteQuery();

             Console.WriteLine("Query for listItems executed.");

             foreach (ListItem item in listItems)
             {
                 item["Output"] = TranslatorHelper.GetTranslation(item["Title"], item["Target Language"], item["Original Language"]);
                 item["IsTranslated"] = true;
                 item.Update();
             }


             context.ExecuteQuery();
             Console.WriteLine("Updated all the list items we found. Carry on...");
         }
      }
   }
   catch (Exception ex)
   {
       Console.WriteLine("ERROR: " + ex.Message);
       Console.WriteLine("ERROR: " + ex.Source);
       Console.WriteLine("ERROR: " + ex.StackTrace);
       Console.WriteLine("ERROR: " + ex.InnerException);
   }
}

The TranslatorHelper class is a helper class that calls a custom translation API; it will not be discussed in detail in this post since it’s well outside the scope.

Note: As you can see from the code, this is a demo and definitely not for production use; please revise it and adjust it according to your coding standards and security principles. The Console.WriteLine statements are there so that we can easily review the execution of the job from the Azure Portal. More on logging and monitoring further down in this article.

27.4 Publishing your WebJob to Azure

When you’ve developed your WebJob and you’re ready to deploy it to your Azure environment (it deploys to an Azure Website), you have two main options, as described below.

27.4.1 Option 1: Upload a zip file with the WebJob binaries to your Azure Portal

Using the Azure Portal where you keep all of your awesomeness in Azure, you can upload a zip-file containing the output from Visual Studio’s build. This is an easy way for compiling and shipping your code to someone else who will do the deployment for you.

27.4.1.1 Create the zip file

Simply grab all the output files from your Visual Studio build (normally in your bin/Debug or bin/Release folder):

A Windows Explorer view of the bin/Debug folder is displayed.
Compress them so you’ll get a nice Zip file for your web job:

A Windows Explorer view of a completed .zip file is displayed.

27.4.1.2 Find a web site where the job should be deployed

Okay, so you’ve got your package. That’s easy enough. The next step is to head over to https://portal.azure.com and log in to your Azure Portal. From there you’ll need to either create a new website or use an existing one – this website will be the host for our WebJob.

In my case, I already have an Azure WebSite for some of my Office 365 demos so I’ll just use that one.

If you scroll down in the settings pane for your website, you’ll find something called “WebJobs” under the “Operations” header:

The author’s Azure Portal is displayed, with an arrow pointing to WebJobs.
Click where the arrow points!

27.4.1.3 Upload your WebJob

Upload your web job by clicking the [+ Add] sign:

The WebJobs Azure portal is displayed, with an arrow pointing to Add.

Choose a Name, how the job should run and the actual zip file:

The Add WebJob dialog is displayed. The Name field contains the text Zimmergren-O365-WebJobSample, and the How to Run field contains the text On Demand.

Important: The “How To Run” alternative only offers “On Demand” or “Continuous” at this point, but soon there will be support for “Scheduled” as well – which is what we really want.

(Hint: In the next section for publishing directly from Azure, you can schedule it from inside VS).

Okay, done – you can now run your webjob from your Azure Portal:

The WebJobs Azure portal is displayed with the new job list. A context menu appears above the job with the options of Run and Delete.

While this is all fine and dandy, since the portal doesn’t have the dialogs for supporting scheduling just yet – I would urge you to check out how to publish from inside Visual Studio 2015 instead (or 2013, if that’s your choice).

27.4.2 Option 2: Publish directly to Azure from Visual Studio

This is my favorite one at this point because I can use the tooling in Visual Studio to quickly publish any changes directly to my hosted service. The other benefit will become clear soon, as you can also schedule the job exactly how you want it to execute directly from the dialogs in Visual Studio.

27.4.2.1 Choose to publish the WebJob from Visual Studio 2015

Note: These dialogs may differ slightly if you’re running an earlier version of Visual Studio. Also, I am already logged in so if you’re doing this for the first time you may get a login-dialog in order to sign in to your Azure account. That’s a pre-requisite.

Simply right-click your project and select “Publish as an Azure WebJob…”:

The Solution Explorer context menu is displayed with the Publish as Azure WebJob option highlighted.

27.4.2.2 Add Azure WebJob

This will bring you to a new dialog where you can configure the job, and since we want a recurring job that should be executed on a schedule (in my case once every night) you can configure the schedule directly from the dialogs:
The Add Azure WebJob dialog is displayed. The WebJob name field contains the text Zimmergren-O365-WebJobSample, the WebJob run mode field contains the option Run on a Schedule, the Recurrence field contains the option Recurring job and the check box No end date is checked, the Recur every field is set to 1 days, and the Starting on date is 9 January 2015.

  • Make sure the name is web friendly
  • Select your run mode, I’m on “Run on a Schedule” because we want to have it occur on a specific time every day
  • Should the job be a recurring job or a one-time job? Since we want to simulate a Timer Job it needs to be recurring, and in my case without any end date since it’ll be running every night
  • You can schedule the recurrence down to every minute, should you want.
  • When do we start? :-)

Hit OK and you’ll see that Visual Studio will drop you a message saying “Installing WebJobs Publishing NuGet Package”.

27.4.2.3 Visual Studio added WebJobs Publishing NuGet Package

The WebJobs NuGet Package Install dialog is displayed which displays a spinner and the text, Installing WebJobs Publishing NuGet Package.

This actually adds a new file called “webjob-publish-settings.json” to our project, containing the configuration for the job.

The file looks like this:

{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "Zimmergren-O365-WebJobSample",
  "startTime": "2015-01-09T01:00:00+01:00",
  "endTime": null,
  "jobRecurrenceFrequency": "Day",
  "interval": 1,
  "runMode": "Scheduled"
}

Right, we don’t need to bother with this file at the moment since we already designed the scheduling using the dialogs.

27.4.2.4 Select publishing/deployment target

The next step in the dialog will be where to publish/deploy your WebJob. You can either import a publishing profile or select Microsoft Azure WebSites in order to authenticate and select one of your existing sites.

Since I’ve got a habit of always downloading my publishing profiles from my Azure Portal, I’ll go ahead and select “Import” and simply specify the publishing profile file that I’ve downloaded from my Azure website:

The dialog Publish Web is displayed with the Connection tab visible.

With that done, all we need to do is click the button called “Publish”. Don’t be afraid, it won’t bite. I think.

27.4.2.5 Publish

Once you hit Publish, the Web Publish Activity dialog will display the progress of your Web Job deployment:
The dialog Web Publish Activity is displayed.

Once it’s done, you should see the WebJob in your Azure Portal:
The Azure Portal shows Zimmergren-O365-WebJobSample in the list of WebJobs with the status of, Completed 2 min ago.

The WebJob status is now displayed as Completed. It would report a failure or error if the job threw any unhandled exceptions or otherwise behaved unexpectedly.

It still says “On Demand”, but this job actually runs on the schedule we configured.

27.5 Monitoring the job and reviewing logs

If you’ve done all the previous steps, you’ve got a job working for you as a scheduled task in the cloud, performing actions toward your Office 365 site(s).

27.5.1 View all job executions and status

If you want to review when the job last ran, what the outcome of every execution of the job was or review what happened during execution of the job, you can click on the link under “Logs” when you’re in the WebJobs overview:
The WebJobs dialog, with an arrow pointing to the Logs link.

This will give you an overview of all the executions of the selected job, including the status/outcome:

The WebJob Details including Recent job runs.

By clicking the highlighted link, you can dig down into a specific execution to review the logs of the job and make sure things look okay. This is probably more relevant if the job actually caused an error and you needed to investigate what went wrong, or if the outcome of the job is incorrect or not as expected.

You can also see that the Console.WriteLine statements that I so nicely used in my Console Application for this demo now show up in the job execution log:

The WebJob Details showing the lines in the log file.

27.6 Tips & Tricks

While this can all be done with earlier versions of Visual Studio, I made everything work with Visual Studio 2015. Along the way there were some gotchas; I’m adding them here in case you bump into the same thing.

27.6.1 Exit code -2146232576 problem when running the job

Since I started a Visual Studio 2015 (Preview) project, it set the project up as a Console Application based on .NET Framework 4.5.3.

Running the job locally works fine, since .NET Framework 4.5.3 exists on my dev machine. However, once I deployed the job to my Azure Website as a WebJob, it failed with “exit code -2146232576”.

27.6.1.1 Solution: Make sure you’re on the correct .NET version

It took a while before I realized that Azure didn’t like .NET Framework version 4.5.3, but when I changed to .NET Framework 4.5, it worked.

If you bump into that problem, just make sure your job is executing under the correct .NET framework version.
Displays the Visual Studio Project Properties page, Application tab, showing the Target framework drop down, highlighting dot NET Framework 4.5.

28 Summary

While there’s not much to building an Azure WebJob, you can make them quite complex. The overall concept is very straightforward, but as with all complex projects come the decisions around authentication, code stability and reliability, high-availability scenarios, maintainability, and so on. These variables are unique to each project and should be carefully considered before “just deploying” a job to Azure.

28.0.2 Applies to

  • Office 365 Multi Tenant (MT)
  • Office 365 Dedicated (D)
  • SharePoint 2013 on-premises

29 Handle SharePoint Online throttling by using exponential back off

Learn how to handle throttling in SharePoint Online by using the exponential back-off technique.

Applies to: Office 365 | SharePoint Online | SharePoint Server 2013

SharePoint Online uses throttling to prevent users from over-consuming resources. When a user runs CSOM or REST code that exceeds usage limits, SharePoint Online throttles any further requests from that user for a period of time.

The Core.Throttling code sample in the Office 365 Developer Patterns and Practices repository shows how to implement the exponential back-off technique to handle throttling in SharePoint Online. When you get throttled in SharePoint Online, the exponential back-off technique waits progressively longer periods of time before retrying the code that was throttled.

For more information about throttling in SharePoint Online (for example, causes, limits, and so on), and an explanation of the Core.Throttling code sample, see How to: Avoid getting throttled or blocked in SharePoint Online.

Also, in the ClientContextExtensions.cs sample, check out the ExecuteQueryImplementation extension method. ExecuteQueryImplementation is included in OfficeDevPnP.Core.
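
To illustrate the idea, the following sketch wraps ExecuteQuery in an exponential back-off loop. This is a simplified, illustrative version, not the actual ExecuteQueryRetry code; the retry count and initial delay shown here are example values.

```csharp
// Illustrative exponential back-off around a CSOM query. When SharePoint
// Online throttles the request (HTTP 429, or 503), the code waits and
// retries, doubling the delay after every throttled attempt.
using System;
using System.Net;
using System.Threading;
using Microsoft.SharePoint.Client;

public static class ThrottlingHelper
{
    public static void ExecuteQueryWithBackoff(ClientContext context,
        int retryCount = 5, int initialDelayMs = 500)
    {
        int delay = initialDelayMs;

        for (int attempt = 0; attempt < retryCount; attempt++)
        {
            try
            {
                context.ExecuteQuery();
                return; // Success, no throttling.
            }
            catch (WebException ex)
            {
                var response = ex.Response as HttpWebResponse;
                bool throttled = response != null &&
                    ((int)response.StatusCode == 429 || (int)response.StatusCode == 503);

                if (!throttled)
                {
                    throw; // Some other failure; do not retry.
                }

                Thread.Sleep(delay);
                delay *= 2; // Back off: wait twice as long next time.
            }
        }

        throw new Exception("Maximum retry attempts exceeded.");
    }
}
```

ExecuteQueryRetry in OfficeDevPnP.Core follows this same pattern, so in practice you can call it instead of writing the loop yourself.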

29.1 Additional resources

<a name="bk_addresources"> </a>

30 Information management sample add-in for SharePoint

As part of your Enterprise Content Management (ECM) strategy, you can get or set site policies to manage the lifecycle of your SharePoint site.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The Core.InformationManagement sample shows you how to use an ASP.NET provider-hosted SharePoint add-in to get and set a site policy on a site. Use this solution if you want to:

  • Apply policy settings during your custom site provisioning process.

  • Create a new or modify an existing site policy.

  • Create a custom expiration formula.

30.1 Before you begin

<a name="sectionSection0"> </a>

To get started, download the Core.InformationManagement sample add-in from the Office 365 Developer patterns and practices project on GitHub.

We recommend that you create at least one site policy, and assign it to your site before you run this add-in. Otherwise, the add-in will start without displaying sample data. For more information, see Overview of site policies in SharePoint 2013.

30.2 Using the Core.InformationManagement sample app

<a name="sectionSection1"> </a>

When you start the app, the start page displays the following information, as shown in Figure 1:

  • The site’s closure and expiration dates. These dates are specific to a site and are based on the configuration settings of the site policy that is applied.

  • All site policies that can be applied to the site.

  • The site policy that is currently applied.

  • The option box to select and apply a new site policy to the site.

Figure 1. Information Management add-in start page

Screenshot of the add-in start page, with site policy closure and expiration values, available and applied site policies, and other policies to apply highlighted.

From your SharePoint site, you can go to the app, which runs on the remote host, by choosing Recent > Core.InformationManagement. To return to your SharePoint site, choose Back to Site.

The Pages\Default.aspx.cs file in the Core.InformationManagementWeb project contains the code for the page displayed in Figure 1.

The following code in the Page_Load method of the Default.aspx.cs page fetches and displays the closure and expiration dates of the site, based on the applied site policy. This code calls the GetSiteExpirationDate and GetSiteCloseDate extension methods of the OfficeDevPnP.Core project.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

// Get site expiration and closure dates.
if (cc.Web.HasSitePolicyApplied())
{
        lblSiteExpiration.Text = String.Format("The expiration date for the site is {0}", cc.Web.GetSiteExpirationDate());
        lblSiteClosure.Text = String.Format("The closure date for the site is {0}", cc.Web.GetSiteCloseDate());
}

The following code in the Page_Load method of the Default.aspx.cs page displays the names of all site policies that can be applied to the site (including the currently applied site policy). This code calls the GetSitePolicies extension method of the OfficeDevPnP.Core project.

// List the defined policies.
List<SitePolicyEntity> policies = cc.Web.GetSitePolicies();
string policiesString = "";
foreach (var policy in policies)
{
    policiesString += String.Format("{0} ({1}) <BR />", policy.Name, policy.Description);
}
lblSitePolicies.Text = policiesString;

The following code in the Page_Load method of the Default.aspx.cs page displays the name of the site policy currently applied to the site. This calls the GetAppliedSitePolicy extension method of the OfficeDevPnP.Core project.

// Show the assigned policy.
SitePolicyEntity appliedPolicy = cc.Web.GetAppliedSitePolicy();
if (appliedPolicy != null)
{
    lblAppliedPolicy.Text = String.Format("{0} ({1})", appliedPolicy.Name, appliedPolicy.Description);
}
else
{
    lblAppliedPolicy.Text = "No policy has been applied";
}

The following code in the Page_Load method of the Default.aspx.cs page populates the drop-down list with the site policies that are available, except for the site policy that is currently assigned to the site.

// Fill the policies combo.
foreach (var policy in policies)
{
    if (appliedPolicy == null || !policy.Name.Equals(appliedPolicy.Name, StringComparison.InvariantCultureIgnoreCase))
    {
        drlPolicies.Items.Add(policy.Name);
    }
}
btnApplyPolicy.Enabled = drlPolicies.Items.Count > 0;

The following code in the Default.aspx.cs page applies the selected site policy to the site. The original site policy is replaced by the new site policy.

protected void btnApplyPolicy_Click(object sender, EventArgs e)
{
    if (drlPolicies.SelectedItem != null)
    {
        cc.Web.ApplySitePolicy(drlPolicies.SelectedItem.Text);
        Page.Response.Redirect(Page.Request.Url.ToString(), true);
    }
}

30.3 Additional resources

<a name="bk_addresources"> </a>

31 Introducing the PnP Provisioning Engine

Author: Paolo Pialorsi - www.piasys.com - @PaoloPia

Applies to: SharePoint 2013 | SharePoint Online | Office 365

This short whitepaper introduces the PnP Provisioning Engine, which was released in April 2015 within the OfficeDev PnP project, and which will be updated on a monthly basis, in line with the release schedule of the Office Dev PnP Core Library. What you will see here is available thanks to the efforts of some of the Office Dev PnP Core Team members (Vesa Juvonen, Bert Jansen, Frank Marasco, Erwin van Hunen, and me), as well as the whole OfficeDev PnP community.

<a name="thegoal"> </a>

31.1 The Goal

Let’s start with the main goal of having a provisioning engine. With the introduction of Microsoft Office 365 and Microsoft SharePoint Online, developers are facing the new Cloud Add-in Model (aka CAM) as a new way of creating custom software solutions for Microsoft SharePoint 2013, Microsoft SharePoint Online, and Microsoft Office 365 in general. While in the past developers provisioned custom artifacts using the CAML/XML-based feature framework, either with Full Trust Code (aka FTC) solutions or Sandbox Solutions, nowadays with the new CAM the approach should be based on the so-called “remote provisioning” technique. But what does “remote provisioning” mean? It means using the Client Side Object Model (CSOM) to provision artifacts, instead of using the feature framework.
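
As a minimal example of the difference, the following sketch provisions a document library remotely with CSOM instead of declaring it in a feature’s elements.xml. The site URL, account, and password are placeholders for your own SharePoint Online environment.

```csharp
// "Remote provisioning" in its simplest form: create an artifact (here, a
// document library) from client-side code, with no WSP or feature XML.
using System;
using System.Security;
using Microsoft.SharePoint.Client;

class RemoteProvisioningDemo
{
    static void Main()
    {
        // Placeholder credentials; use your own tenant, account, and password.
        var password = new SecureString();
        foreach (char c in "password-goes-here") password.AppendChar(c);

        using (var context = new ClientContext("https://tenant.sharepoint.com/sites/demo"))
        {
            context.Credentials =
                new SharePointOnlineCredentials("user@tenant.onmicrosoft.com", password);

            // Provision a document library through CSOM.
            var creationInfo = new ListCreationInformation
            {
                Title = "Invoices",
                TemplateType = (int)ListTemplateType.DocumentLibrary
            };
            context.Web.Lists.Add(creationInfo);
            context.ExecuteQuery();
        }
    }
}
```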

Well, and what if I want to model and provision artifacts across both a test and a production environment? Or what if I want to automate the provisioning of artifacts, because I want to sell my customizations to multiple customers? Or again, what if I want to define a custom site template that I can re-use across multiple site instances, like customer-oriented sites or project-oriented sites?

Using the new PnP Provisioning Engine, you can model – even simply by using the web browser – the design of Site Columns, Content Types, List Definitions and Instances, Composed Looks, Pages (either WebPart Pages or Wiki Pages), and much more. When you are done with the design, you can export what you have done into a persistent provisioning template format (XML, JSON, or whatever you like), and you can apply that template to as many target sites as you like.

If it sounds interesting, read on and let’s learn how to use it!

<a name="creatingtemplate"> </a>

31.2 Creating a Provisioning Template

As already stated, the easiest way to create a custom provisioning template is to create a fresh new site collection in Microsoft SharePoint Online, to define your artifacts (Composed Look, Site Columns, Content Types, Lists Instances, Pages, Files, etc.) and to save the result as a Provisioning Template.

Thus, let’s say you have defined a sample site with a custom look (custom color theme, custom logo, custom background image). You can see the resulting Home Page in the following figure.

The home page of a template site

Moreover, you have defined a couple of Site Columns, a Content Type and a Library of Invoices with a custom View. In the two following figures you can see the result.

A library of Invoices with a custom content type

The settings page of the Invoices library

In order to export that site as a Provisioning Template, you can use either some PowerShell scripting (thanks to the efforts of Erwin!) or some CSOM code with the extension methods provided by the OfficeDev PnP Core Library.

In order to use the PowerShell extensions, you can simply browse to the proper URL (http://aka.ms/officedevpnpcmdlets16 for Microsoft SharePoint Online, or http://aka.ms/officedevpnpcmdlets15 for Microsoft SharePoint 2013) and install the OfficeDev PnP Core PowerShell extensions. Then, after connecting your PowerShell environment to Microsoft Office 365 by using the Connect-SPOnline cmdlet, you will be able to use the following PowerShell cmdlet:

Get-SPOProvisioningTemplate -Out "PnP-Provisioning-File.xml"

The -Out argument instructs the cmdlet where to save the Provisioning Template.

On the other hand, in order to use the CSOM extensions, you can simply create any kind of .NET software project (Console, Windows, SharePoint Add-in, whatever you like) and add the OfficeDev PnP NuGet package. The NuGet package is available in two flavors: OfficeDev PnP Core V15, which targets Microsoft SharePoint 2013 on-premises, and OfficeDev PnP Core, which targets Microsoft SharePoint Online.

Let’s target Microsoft SharePoint Online, which so far has been tested the most and has been the main focus of the PnP Core Team’s efforts. You will simply need to connect to Microsoft Office 365, create a ClientContext instance, and retrieve a reference to a Web object. Thanks to a new extension method, called GetProvisioningTemplate, you will be able to retrieve a ProvisioningTemplate object that can be saved using a template provider and a serialization formatter. Both the template provider and the serialization formatter objects can be customized, so you can implement whatever persistence storage and serialization format you like. Out of the box, the PnP Provisioning Engine provides template providers for the file system, SharePoint, and Azure Blob Storage, as well as XML and JSON serialization formatters. In the following figure (credits to Vesa) you can see an outline of the overall architecture of the PnP Provisioning Engine.

The architecture of the PnP Provisioning Engine Framework
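
In code, the CSOM counterpart of the Get-SPOProvisioningTemplate cmdlet looks roughly like the following sketch. It assumes an already authenticated ClientContext named context, and the output folder is a placeholder.

```csharp
// Extract a ProvisioningTemplate from a model site and persist it as XML
// through the file-system template provider.
using Microsoft.SharePoint.Client;
using OfficeDevPnP.Core.Framework.Provisioning.Model;
using OfficeDevPnP.Core.Framework.Provisioning.Providers.Xml;

Web web = context.Web;

// Build the serialization-format-independent Domain Model object.
ProvisioningTemplate template = web.GetProvisioningTemplate();

// Save it through a template provider; here the file-system provider,
// which uses the XML serialization formatter by default.
XMLTemplateProvider provider =
    new XMLFileSystemTemplateProvider(@"C:\Templates", string.Empty);
provider.SaveAs(template, "PnP-Provisioning-File.xml");
```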

The result of extracting and saving a ProvisioningTemplate instance will be, for instance, an XML file like the one shown in the following excerpt:

<?xml version="1.0"?>
<pnp:Provisioning xmlns:pnp="http://schemas.dev.office.com/PnP/2015/05/ProvisioningSchema">
  <pnp:Preferences Generator="OfficeDevPnP.Core, Version=1.2.515.0, Culture=neutral, PublicKeyToken=null" />
  <pnp:Templates ID="CONTAINER-TEMPLATE-1D3F60898418437E8B275147BEC7B0F5">
<pnp:ProvisioningTemplate ID="TEMPLATE-1D3F60898418437E8B275147BEC7B0F5" Version="1">
  <pnp:Security>
<pnp:AdditionalAdministrators>
  <pnp:User Name="i:0#.f|membership|paolo@piasysdev.onmicrosoft.com" />
</pnp:AdditionalAdministrators>
  </pnp:Security>
  <pnp:Files>
<pnp:File Src="PnP.png" Folder="SiteAssets" Overwrite="true" />
<pnp:File Src="STB13_Rick_01_small.png" Folder="SiteAssets" Overwrite="true" />
  </pnp:Files>
  <pnp:SiteFields>
<Field Type="DateTime" DisplayName="Invoice Date" Required="FALSE" EnforceUniqueValues="FALSE"
    Indexed="FALSE" Format="DateOnly" Group="PnP Columns" FriendlyDisplayFormat="Disabled"
    ID="{f1c6f202-f976-4f4e-b0a3-8b984991d00d}" SourceID="{5a15b9ca-4410-4854-bc61-d7fb0ff84e56}"
    StaticName="PnPInvoiceDate" Name="PnPInvoiceDate" CalType="0">
  <Default>[today]</Default>
</Field>
<Field Type="Text" DisplayName="Invoice Number" Required="FALSE" 
    EnforceUniqueValues="FALSE" Indexed="FALSE" MaxLength="20" Group="PnP Columns"
    ID="{5049a822-424c-4479-9648-79c4b3214375}" SourceID="{5a15b9ca-4410-4854-bc61-d7fb0ff84e56}"
    StaticName="PnPInvoiceNumber" Name="PnPInvoiceNumber">
</Field>
  </pnp:SiteFields>
  <pnp:ContentTypes>
<pnp:ContentType ID="0x01010097931365769EE34E9078576A150FF52E" Name="Invoice"
    Description="" Group="PnP Content Types">
  <pnp:FieldRefs>
<pnp:FieldRef ID="5049a822-424c-4479-9648-79c4b3214375" Name="PnPInvoiceNumber" />
<pnp:FieldRef ID="f1c6f202-f976-4f4e-b0a3-8b984991d00d" Name="PnPInvoiceDate" />
  </pnp:FieldRefs>
</pnp:ContentType>
  </pnp:ContentTypes>
  <pnp:Lists>
<pnp:ListInstance Title="Invoices" Description=""
    DocumentTemplate="{site}/Invoices/Forms/template.dotx" TemplateType="101" Url="Invoices"
    EnableVersioning="true" MinorVersionLimit="0" MaxVersionLimit="500"
    TemplateFeatureID="00bfea71-e717-4e80-aa17-d0c71b360101" ContentTypesEnabled="true"
    EnableAttachments="false">
  <pnp:ContentTypeBindings>
<pnp:ContentTypeBinding ContentTypeID="0x01010097931365769EE34E9078576A150FF52E" Default="true" />
  </pnp:ContentTypeBindings>
  <pnp:Views>
<View Name="{3D715498-8FA2-4B80-8D35-885B2A4CCBDE}" MobileView="TRUE" MobileDefaultView="TRUE"
        Type="HTML" DisplayName="All Documents"
        Url="/sites/PnPProvisioningDemo/Invoices/Forms/AllItems.aspx" Level="1"
        BaseViewID="1" ContentTypeID="0x" ImageUrl="/_layouts/15/images/dlicon.png?rev=38">
  <Query>
<OrderBy>
  <FieldRef Name="FileLeafRef" />
</OrderBy>
  </Query>
  <ViewFields>
<FieldRef Name="DocIcon" />
<FieldRef Name="LinkFilename" />
<FieldRef Name="Modified" />
<FieldRef Name="Editor" />
  </ViewFields>
  <RowLimit Paged="TRUE">30</RowLimit>
  <JSLink>clienttemplates.js</JSLink>
  <XslLink Default="TRUE">main.xsl</XslLink>
  <Toolbar Type="Standard" />
</View>
<View Name="{D9BC935E-2154-47EE-A9E2-7C9490389007}" DefaultView="TRUE" MobileView="TRUE"
        Type="HTML" DisplayName="All Invoices"
        Url="/sites/PnPProvisioningDemo/Invoices/Forms/All Invoices.aspx" Level="1"
        BaseViewID="1" ContentTypeID="0x" ImageUrl="/_layouts/15/images/dlicon.png?rev=38">
  <Query>
<OrderBy>
  <FieldRef Name="FileLeafRef" />
</OrderBy>
  </Query>
  <ViewFields>
<FieldRef Name="DocIcon" />
<FieldRef Name="LinkFilename" />
<FieldRef Name="Modified" />
<FieldRef Name="Editor" />
<FieldRef Name="PnPInvoiceDate" />
<FieldRef Name="PnPInvoiceNumber" />
  </ViewFields>
  <RowLimit Paged="TRUE">30</RowLimit>
  <Aggregations Value="Off" />
  <JSLink>clienttemplates.js</JSLink>
  <XslLink Default="TRUE">main.xsl</XslLink>
  <Toolbar Type="Standard" />
</View>
  </pnp:Views>
  <pnp:FieldRefs>
<pnp:FieldRef ID="5049a822-424c-4479-9648-79c4b3214375" Name="PnPInvoiceNumber"
        DisplayName="Invoice Number" />
<pnp:FieldRef ID="f1c6f202-f976-4f4e-b0a3-8b984991d00d" Name="PnPInvoiceDate"
        DisplayName="Invoice Date" />
  </pnp:FieldRefs>
</pnp:ListInstance>
  </pnp:Lists>
  <pnp:Features />
  <pnp:CustomActions />
  <pnp:ComposedLook
BackgroundFile="{sitecollection}/SiteAssets/STB13_Rick_01_small.png"
ColorFile="{sitecollection}/_catalogs/theme/15/Palette012.spcolor"
SiteLogo="{sitecollection}/SiteAssets/PnP.png"
Name="RED"
MasterPage="{sitecollection}/_catalogs/masterpage/seattle.master"
FontFile="" />
</pnp:ProvisioningTemplate>
  </pnp:Templates>
</pnp:Provisioning>

As you can see, the XML elements are almost self-explanatory. The XML schema used in the example references the 201505 version of the PnP Provisioning Schema (XML namespace: *http://schemas.dev.office.com/PnP/2015/05/ProvisioningSchema*), which has been defined together with the whole OfficeDev PnP community, and which can be found on GitHub at the following URL: https://github.com/SharePoint/Pnp-Provisioning-Schema/. Within the same repository, you will also find an auto-generated markdown (MD) document that describes the main elements, types, and attributes available for manually defining an XML provisioning template.

However, the real power of this provisioning engine is the availability of a high-level, serialization-format-independent Domain Model. Internally, the PnP Provisioning Engine is completely decoupled from any serialization format, and the whole engine simply handles instances of the ProvisioningTemplate type. For instance, in the following figure you can see the “Quick Watch” window of Microsoft Visual Studio 2013 showing a ProvisioningTemplate object instance.

The structure - within a debugger watch - of a ProvisioningTemplate object

It is up to you how to define the ProvisioningTemplate: by using a model site, by composing an XML document that validates against the PnP Provisioning XSD schema, or simply by writing .NET code and constructing the hierarchy of objects. You can even mix those approaches: you can design the provisioning template using a model site, save it into an XML file, and then apply some in-memory customizations while handling the ProvisioningTemplate instance in your code.

<a name="applyingtemplate"> </a>

31.3 Applying a Provisioning Template

Now that you have seen what a Provisioning Template is, and how to extract a Domain Model object from an existing site, you are ready to apply it to a target site. Let’s say that you have another fresh new site, which for instance is the root site of a new Site Collection in Microsoft SharePoint Online that has been created using the Team Site template, as shown in the following figure.

The SharePoint Online page for creating a new site collection

By default, the site will look like the following figure, which is the default layout of a SharePoint Online site.

The home page of a fresh new target

Once again, you can apply a custom ProvisioningTemplate instance either by using some PowerShell scripting or by writing some .NET code. If you want to use PowerShell, the following excerpt shows the Apply-SPOProvisioningTemplate cmdlet in action.

Apply-SPOProvisioningTemplate -Path "PnP-Provisioning-File.xml"

The -Path argument refers to the source template file, and the cmdlet will automatically apply it to the currently connected site (implied by the Connect-SPOnline cmdlet). In the following figure you can see the final result.

The home page of a target site based on a Provisioning Template

As you can see, the site has the same look as the original template, and it includes the Invoices library, with all the provisioning artifacts (Site Columns, Content Types, etc.) under the covers.
And what about using .NET code? Here is an excerpt showing how to use CSOM and the OfficeDev PnP Core Library extension methods to apply the template.

using (var context = new ClientContext(destinationUrl))
{
  context.Credentials = new SharePointOnlineCredentials(userName, password);
  Web web = context.Web;
  context.Load(web, w => w.Title);
  context.ExecuteQueryRetry();

  // Configure the XML file system provider
  XMLTemplateProvider provider =
  new XMLFileSystemTemplateProvider(
    String.Format(@"{0}\..\..\",
    AppDomain.CurrentDomain.BaseDirectory),
    "");

  // Load the template from the XML stored copy
  ProvisioningTemplate template = provider.GetTemplate(
    "PnP-Provisioning-Demo-201505-Polished.xml");

  // Apply the template to another site
  Console.WriteLine("Start: {0:hh.mm.ss}", DateTime.Now);

  // We can also use Apply-SPOProvisioningTemplate
  web.ApplyProvisioningTemplate(template);
 
  Console.WriteLine("End: {0:hh.mm.ss}", DateTime.Now);
}

You simply need to create an instance of a Template Provider object, depending on the kind of persistence you use to save and load the template. You then load the template from the source repository by using the GetTemplate method. Lastly, you apply the template to the target site by using the ApplyProvisioningTemplate extension method of the Web type.

On average, the library takes around a couple of minutes to apply the template, regardless of whether you use PowerShell, .NET, or anything else. If you want, you can register a delegate to monitor the overall process while the provisioning is in progress. We are still improving the performance of the engine; so far we have focused our attention on capabilities and functionality.
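
For example, ApplyProvisioningTemplate accepts a ProvisioningTemplateApplyingInformation object whose ProgressDelegate is invoked as provisioning advances. A minimal sketch, assuming the web and template variables from the previous excerpt, might look like this:

```csharp
// Monitor provisioning progress through a delegate; each callback reports
// the current step, the total number of steps, and a descriptive message.
using System;
using OfficeDevPnP.Core.Framework.Provisioning.ObjectHandlers;

var applyingInformation = new ProvisioningTemplateApplyingInformation
{
    ProgressDelegate = (message, progress, total) =>
    {
        Console.WriteLine("{0:00}/{1:00} - {2}", progress, total, message);
    }
};

web.ApplyProvisioningTemplate(template, applyingInformation);
```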

<a name="advancedtopics"> </a>

31.4 Advanced Topics

This is just an introductory article; in the near future we will go deeper into some more advanced topics. Nevertheless, it is important to underline that with the new PnP Provisioning Engine you can also provision taxonomies, and you can use variables and tokens that are replaced at runtime based on what you are provisioning (list IDs, parameters, term IDs, etc.). You can invoke the provisioning engine from timer job services, provider-hosted add-ins, external sites, or anywhere else. Lastly, you can use the PnP Provisioning Engine to move artifacts from test/staging environments to production environments.

Moreover, Channel 9 has a section dedicated to OfficeDev PnP, where you can watch videos about the PnP Provisioning Engine and the PnP PowerShell extensions.

<a name="wrapup"> </a>

31.5 Requirements and Wrap Up

In order to play with the PnP Provisioning Engine on-premises, you need to have at least the SharePoint 2013 March 2015 Cumulative Update installed. In fact, the engine leverages some new capabilities of the Client Side Object Model, which are not available in previous versions of the product. If you target Microsoft SharePoint Online, the requirements are automatically satisfied thanks to the Software as a Service model.

Please play with the PnP Provisioning Engine, give us feedback, and enjoy the future of the SharePoint Add-in Model and remote provisioning!

<a name="bk_addresources"> </a>

31.6 Additional resources

32 Localize UI elements sample add-in for SharePoint

You can localize SharePoint UI elements by using JavaScript to replace the text value of a UI element with a translated text value loaded from a JavaScript resource file.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The Core.JavaScriptCustomization sample add-in shows you how to use JavaScript to replace the text value of a SharePoint UI element with a translated text value, which is read from a JavaScript resource file.

Note You are responsible for maintaining the translated text values in the JavaScript resource file.

This code sample uses a provider-hosted add-in to:

  • Localize a site page or Quick Launch link title with specific text values.

  • Preserve a site page or Quick Launch link title in a primary language, and provide translated versions of the site page and Quick Launch link title in another language at run time.

  • Use JavaScript resource files for client side localization.

  • Link a JavaScript file to a SharePoint site using a custom action.

  • Check the UI culture of the site and then load culture-specific text values from a JavaScript resource file.

  • Overwrite site page and Quick Launch link titles with culture-specific text values using jQuery.

32.1 Before you begin

<a name="sectionSection0"> </a>

To get started, download the Core.JavaScriptCustomization sample add-in from the Office 365 Developer patterns and practices project on GitHub.

Before you run this code sample, configure the language settings on your site, and set the display language on your user’s profile page.

32.1.1 To configure the language settings on your site

  1. On your team site, choose Settings > Site settings.

  2. In Site Administration, choose Language settings.

  3. On the Language Settings page, in Alternate language(s), choose the alternate languages your site should support. For example, choose French and Finnish, as shown in Figure 1.

  4. Choose OK.

32.1.2 To set the display language on your user’s profile page

  1. At the top of your Office 365 site, choose your profile picture, and then choose About me, as shown in Figure 2.

  2. On the About me page, choose edit your profile.

  3. Choose the ellipsis (…) for additional options, and then choose Language and Region.

  4. In My Display Languages, choose a new language in the Pick a new language dropdown, then choose Add. For example, choose French and Finnish, as shown in Figure 3. You might need to move your preferred language up or down by choosing the up and down arrows.

  5. Choose Save all and close.

Note It might take a few minutes for your site to render in the selected language(s).

Important The CSOM is periodically updated with new features. If the CSOM provides new features to update site page or Quick Launch link titles, we recommend that you use the new features in the CSOM instead of the options discussed here.

Figure 1. Setting the language for a site

Screenshot of the Language Settings page of Site Settings

Figure 2. Navigating to a user’s profile page by choosing About me

Screenshot of the user profile page with About me highlighted

Figure 3. Changing a user’s display language settings on the user’s profile page

Screenshot of the Language and Region section of the Edit Details page

Before you run Scenario 2 of this code sample, complete the following tasks.

32.1.3 To add a link to the Quick Launch

  1. On the host web, choose EDIT LINKS.

  2. Choose link, as shown in Figure 4.

  3. In Text to display, enter My quicklaunch entry.

  4. In Address, enter the URL of a website.

  5. Choose OK > Save.

Figure 4. Adding a link to the Quick Launch

Screenshot of the EDIT LINKS page, with link highlighted

32.1.4 To create a site page

  1. On the host web, choose Site Contents > Site Pages > new.

  2. In New page name, enter Hello SharePoint.

  3. Choose Create.

  4. Enter Test page in the body of the page.

  5. Choose Save.

32.2 Using the Core.JavaScriptCustomization sample app

<a name="sectionSection1"> </a>

When you run this code sample, a provider-hosted application appears, as shown in Figure 5. This article describes Scenario 1 and Scenario 2 because you might use the techniques in Scenario 1 and Scenario 2 to provide localized versions of your site page and Quick Launch link titles.

Figure 5. Start page of the Core.JavaScriptCustomization app

Screenshot showing the Start page of the Core.JavaScriptCustomization app

32.2.1 Scenario 1

Scenario 1 shows how to add a reference to a JavaScript file on a SharePoint site using a custom action. Choosing the Inject customization button calls the btnSubmit_Click method in scenario1.aspx.cs. The btnSubmit_Click method calls AddJsLink to add references to JavaScript files using a custom action on the host web.

Figure 6 shows the start page for Scenario 1.

Figure 6. Scenario 1 start page

Screenshot of the start page for Scenario 1

The AddJsLink method is part of the JavaScriptExtensions.cs file in OfficeDevPnP.Core. AddJsLink requires that you supply a string representing the identifier to assign to the custom action, and a string containing a semicolon-delimited list of URLs to the JavaScript files that you want to add to the host web. Note that this code sample adds a reference to Scripts\scenario1.js, which adds a status bar message to the host web.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

protected void btnSubmit_Click(object sender, EventArgs e)
{
    var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);
    using (var cc = spContext.CreateUserClientContextForSPHost())
    {
        cc.Web.AddJsLink(Utilities.Scenario1Key, Utilities.BuildScenarioJavaScriptUrl(Utilities.Scenario1Key, this.Request));
    }
}

Note SharePoint 2013 uses Minimal Download Strategy to reduce the amount of data the browser downloads when users navigate between pages on a SharePoint site. For more information, see Minimal Download Strategy overview. In scenario1.js, the following code ensures that, whether or not Minimal Download Strategy is used on your SharePoint site, the RemoteManager_Inject method is always called to run the JavaScript code that adds the status bar message to the host web.

if ("undefined" != typeof g_MinimalDownload && g_MinimalDownload && (window.location.pathname.toLowerCase()).endsWith("/_layouts/15/start.aspx") && "undefined" != typeof asyncDeltaManager) {
    // Register script for MDS if possible.
    RegisterModuleInit("scenario1.js", RemoteManager_Inject); //MDS registration
    RemoteManager_Inject(); //non MDS scenario
} else {
    RemoteManager_Inject();
}

Note Some JavaScript files may depend on other JavaScript files being loaded first before they can run and complete successfully. The following code construct from RemoteManager_Inject uses the loadScript function in scenario1.js to first load jQuery, and then continue running the remaining JavaScript code.

var jQuery = "https://ajax.aspnetcdn.com/ajax/jQuery/jquery-2.0.2.min.js";

// Load jQuery first, then continue running the rest of the code.
loadScript(jQuery, function () {
    // Add additional JavaScript code here to complete your task.
});

Choose Back to Site. As shown in Figure 7, the host web now displays a status bar message that was added by scenario1.js.

Figure 7. Status bar message added to a team site using JavaScript

Screenshot of the status bar message added to a team site by using JavaScript

32.2.2 Scenario 2

<a name="bk_Scenario2"> </a>

Scenario 2 uses the technique described in Scenario 1 to replace UI text with translated text read from a JavaScript resource file. Scenario 2 replaces the Quick Launch link title (My quicklaunch entry) and the site page title (Hello SharePoint) that you created earlier. Scenario 2 attaches a JavaScript file that reads translated text values from variables in culture-specific JavaScript resource files and then updates the UI. Figure 8 shows the start page for Scenario 2.

Figure 8. Scenario 2 start page

Screenshot of the start page for Scenario 2

As shown in Figure 8, choosing Inject customization applies the following changes to the site:

  • The Quick Launch link title My quicklaunch entry is changed to Contoso link.

  • The Hello SharePoint site page title is changed to Contoso page.

Figure 9. Scenario 2 customizations

Scenario 2 customizations

Note If your values for the Quick Launch link title and site page title differ from those shown in Figure 8, edit the quickLauch_Scenario2 and pageTitle_HelloSharePoint variables in the JavaScript resource files scenario2.en-us.js or scenario2.nl-nl.js. Then run the code sample again. The scenario2.en-us.js file stores English (US) culture-specific resources. The scenario2.nl-nl.js file stores Dutch culture-specific resources. If you are testing this code sample using another language, consider creating another JavaScript resource file using the same naming convention.

Similar to Scenario 1, btnSubmit_Click in scenario2.aspx.cs calls AddJsLink to add a reference to the Scripts\scenario2.js file. In scenario2.js, the RemoteManager_Inject function calls the TranslateQuickLaunch function, which performs the following tasks:

  • Determines the site’s culture using _spPageContextInfo.currentUICultureName.

  • Loads the JavaScript resource file containing culture-specific resources that match the UI culture of the site. For example, if the site’s culture is English (United States), the scenario2.en-us.js file is loaded.

  • Replaces my quicklaunch entry with the value of the quickLauch_Scenario2 variable read from the JavaScript resource file.

function RemoteManager_Inject() {

    var jQuery = "https://ajax.aspnetcdn.com/ajax/jQuery/jquery-2.0.2.min.js";
    
    loadScript(jQuery, function () {
        SP.SOD.executeOrDelayUntilScriptLoaded(function () { TranslateQuickLaunch(); }, 'sp.js');
    });
}

function TranslateQuickLaunch() {
    // Load jQuery and if complete, load the JS resource file.
    var scriptUrl = "";
    var scriptRevision = "";
    // iterate over the scripts loaded on the page to find the scenario2 script. Then use the script URL to dynamically build the URL for the resource file to be loaded.
    $('script').each(function (i, el) {
        if (el.src.toLowerCase().indexOf('scenario2.js') > -1) {
            scriptUrl = el.src;
            scriptRevision = scriptUrl.substring(scriptUrl.indexOf('.js') + 3);
            scriptUrl = scriptUrl.substring(0, scriptUrl.indexOf('.js'));
        }
    })

    var resourcesFile = scriptUrl + "." + _spPageContextInfo.currentUICultureName.toLowerCase() + ".js" + scriptRevision;
    // Load the JS resource file based on the user's language settings.
    loadScript(resourcesFile, function () {

        // General changes that apply to all loaded pages.
        // ----------------------------------------------

        // Update the Quick Launch labels.
        // Note that you can use the jQuery each function to iterate over all elements that match your jQuery selector.
        $("span.ms-navedit-flyoutArrow").each(function () {
            if (this.innerText.toLowerCase().indexOf('my quicklaunch entry') > -1) {
                // Update the label.
                $(this).find('.menu-item-text').text(quickLauch_Scenario2);
                // Update the tooltip.
                $(this).parent().attr("title", quickLauch_Scenario2);
            }
        });

        // Page specific changes require an IsOnPage call.
        // ----------------------------------------------------------

        // Change the title of the "Hello SharePoint" page.
        if (IsOnPage("Hello%20SharePoint.aspx")) {
            $("#DeltaPlaceHolderPageTitleInTitleArea").find("A").each(function () {
                if ($(this).text().toLowerCase().indexOf("hello sharepoint") > -1) {
                    // Update the label.
                    $(this).text(pageTitle_HelloSharePoint);
                    // Update the tooltip.
                    $(this).attr("title", pageTitle_HelloSharePoint);
                }
            });
        }

    });
}
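The URL manipulation inside TranslateQuickLaunch can be distilled into a small pure function. The following sketch (a hypothetical helper, not part of the sample) shows how the culture-specific resource file URL is derived from the scenario2.js script URL, preserving any revision suffix appended by SharePoint:

```javascript
// Hypothetical helper mirroring the resource-file URL derivation in
// TranslateQuickLaunch: split the script URL at ".js", insert the lowercase
// UI culture name, and re-append any revision suffix (e.g. "?rev=...").
function buildResourceFileUrl(scriptSrc, uiCultureName) {
  var jsIndex = scriptSrc.indexOf(".js");
  var base = scriptSrc.substring(0, jsIndex);      // ".../scenario2"
  var revision = scriptSrc.substring(jsIndex + 3); // "?rev=..." or ""
  return base + "." + uiCultureName.toLowerCase() + ".js" + revision;
}

console.log(buildResourceFileUrl(
  "https://contoso.com/Scripts/scenario2.js?rev=abc", "en-US"));
// https://contoso.com/Scripts/scenario2.en-us.js?rev=abc
```

This is why the naming convention of the resource files matters: for a culture of nl-NL, the derived URL points at scenario2.nl-nl.js, which must exist alongside scenario2.js.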

32.3 Additional resources

<a name="bk_addresources"> </a>

33 Migrate InfoPath forms to SharePoint 2013

Migrate InfoPath forms in your SharePoint add-ins to other supported solutions, such as Access applications, sandbox solutions, or the add-in model.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

If you’re using InfoPath as the basis for creating forms in your add-ins, now is the time to start thinking about migrating your forms to other solutions. Although InfoPath is currently supported, InfoPath 2013 is the last release of the desktop InfoPath client, and InfoPath Forms Services in SharePoint 2013 is the last release of InfoPath Forms Services. The client and the on-premises version of InfoPath Forms Services in SharePoint 2013 will be fully supported until 2023. The forms service will be supported in Office 365 until at least the next major release of Office.

To replace your InfoPath forms, you can choose one of the following alternatives:

  • Use Access applications.

  • Use Microsoft Flow and Microsoft PowerApps.

  • Move complex behaviors to the add-in model and client-side development.

We recommend the first two solutions, because information workers who don’t know how to write and deploy code-based alternatives can implement them. The following table describes the scenarios for which each alternative is best suited.

Alternatives to InfoPath in SharePoint 2013

Alternative Scenario
Access applications This option supports multiple forms that handle relational data contained in multiple Access tables, Excel tables, and/or SharePoint lists.
Use Microsoft Flow and Microsoft PowerApps This is our recommended approach for extending lists by SharePoint power users.
New add-in model and client-side development You can convert complex forms driven by extensive code into provider-hosted add-ins or client-side web parts. This option requires developer resources.

33.1 Additional resources

<a name="bk_addresources"> </a>

34 Migrate user profile properties sample add-in for SharePoint

You can use a provider-hosted add-in to migrate and import SharePoint user profile data.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The Core.ProfileProperty.Migration sample add-in shows you how to migrate user profile data from SharePoint Server 2010 or SharePoint Server 2013 into SharePoint Online.

This sample includes two console applications. The extract application uses the server-side object model to extract single and multivalued user profile data to an XML file; the import application uses the userprofileservice.asmx web service to import the extracted data into the user profile service in SharePoint Online.

Use this code sample if you want to:

  • Extract user profile data in SharePoint Server 2010 or SharePoint Server 2013.

  • Import user profile data into SharePoint Online.

34.1 Before you begin

<a name="sectionSection0"> </a>

To get started, download the Core.ProfileProperty.Migration sample add-in from the Office 365 Developer patterns and practices project on GitHub. The code sample contains two projects.

For the Contoso.ProfileProperty.Migration.Extract project:

  • Because this code sample uses the server-side object model, be sure that you are running the project on a server with SharePoint Server 2010 or SharePoint Server 2013 installed.

  • Use an account that has SharePoint farm administrator permissions.

  • Edit the App.config file using the configuration information listed in Table 1.

  • For all users, ensure that the Work email user profile property is not empty. If the value of the Work email user profile property is empty, the extraction process will end prematurely.

  • This code sample extracts user profiles from SharePoint Server 2010. If you are extracting user profiles from SharePoint Server 2013, do the following:

a. Open the shortcut menu (right-click) for Contoso.ProfileProperty.Migration.Extract > Properties.

b. Under Application, in Target framework choose .NET Framework 4.

c. Choose Yes, then Save.

Table 1. Configuration settings for App.Config file

Configuration setting name Description Example
MYSITEHOSTURL My Site URL on the source SharePoint Server 2010 or SharePoint Server 2013 farm. http://my.contoso.com
PROPERTYSEPERATOR The character used to separate multiple values in a multivalued user profile property.
USERPROFILESSTORE The XML file path to use to write extracted user profile data. C:\temp\ProfileData.xml
LOGFILE The file path to use for the log file. C:\temp\Extract.log
ENABLELOGGING Enable disk logging. True
TESTRUN Performs a test extraction to confirm that your configuration settings in App.Config are correct. Set TESTRUN=true to perform a test extraction; the test run extracts only one user from the user profile service. Set TESTRUN=false to extract all users from the user profile service.

For the Contoso.ProfileProperty.Migration.Import project

  • Ensure that user profiles exist in Office 365.

  • Ensure that the user’s Work email address is the same in the SharePoint Server 2013 on-premises and Office 365 user profile service.

  • In the App.config file, change the value element of the Contoso_ProfileProperty_Migration_Import_UPSvc_UserProfileService setting to include a reference to the user profile service in your SharePoint Online admin center, as shown in the following example.

    <applicationSettings>
    <Contoso.ProfileProperty.Migration.Import.Properties.Settings>
    <setting name="Contoso_ProfileProperty_Migration_Import_UPSvc_UserProfileService" serializeAs="String">
    <value>https://contoso-admin.sharepoint.com/_vti_bin/userprofileservice.asmx</value>
    </setting>
    </Contoso.ProfileProperty.Migration.Import.Properties.Settings>
    </applicationSettings>
  • Edit the App.config file using the configuration settings listed in Table 2.

Table 2. App.config file configuration settings

Configuration setting name Description Example
tenantName This is your tenant’s name. If your tenant URL is https://contoso.onmicrosoft.com, enter contoso as your tenant name.
PROPERTYSEPERATOR The character used to separate values in a multivalued user profile property.
USERPROFILESSTORE The XML file to use to read extracted user profile data. C:\temp\ProfileData.xml
LOGFILE Log file used for event logging. C:\temp\Extract.log
ENABLELOGGING Enable disk logging. True
SPOAdminUserName An Office 365 administrator’s username. Not applicable.
SPOAdminPassword An Office 365 administrator’s password. Not applicable.

34.2 Using the Core.ProfileProperty.Migration app

<a name="sectionSection1"> </a>

This code sample runs as a console application. When the code sample runs, the Main function in Program.cs performs the following tasks:

  • Connects to the My Site Host and uses UserProfileManager to connect to the user profile service. UserProfileManager belongs to the Microsoft.Office.Server.UserProfiles.dll assembly.

  • Creates a list called pData to store extracted user profile data.

  • For each user in the user profile service, it does the following:

    • Uses GetSingleValuedProperty to copy the WorkEmail and AboutMe user profile properties to a UserProfileData object called userData.

    • Uses GetMultiValuedProperty to copy the SPS-Responsibility user profile property to userData.

  • Uses UserProfileCollection.Save to serialize the collected user profile data to an XML file. The XML file is saved at the file path you specified in App.config.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

static void Main(string[] args)
        {
            int userCount = 1;

            try
            {

                if (Convert.ToBoolean(ConfigurationManager.AppSettings["TESTRUN"]))
                {
                    LogMessage(string.Format("******** RUNNING IN TEST RUN MODE **********"), LogLevel.Debug);
                }
                
                LogMessage(string.Format("Connecting to My Site Host: '{0}'...", ConfigurationManager.AppSettings["MYSITEHOSTURL"]), LogLevel.Info);
                using (SPSite mySite = new SPSite(ConfigurationManager.AppSettings["MYSITEHOSTURL"]))
                {
                    LogMessage(string.Format("Connecting to My Site Host: '{0}'...Done!", ConfigurationManager.AppSettings["MYSITEHOSTURL"]), LogLevel.Info);

                    LogMessage(string.Format("getting Service Context..."), LogLevel.Info);
                    SPServiceContext svcContext = SPServiceContext.GetContext(mySite);
                    LogMessage(string.Format("getting Service Context...Done!"), LogLevel.Info);

                    LogMessage(string.Format("Connecting to Profile Manager..."), LogLevel.Info);
                    UserProfileManager profileManager = new UserProfileManager(svcContext);
                    LogMessage(string.Format("Connecting to Profile Manager...Done!"), LogLevel.Info);

                    // Size of the List is set to the number of profiles.
                    List<UserProfileData> pData = new List<UserProfileData>(Convert.ToInt32(profileManager.Count));
                    
                    // Initialize Serialization Class.
                    UserProfileCollection ups = new UserProfileCollection();

                    foreach (UserProfile spUser in profileManager)
                    {
                        // Get Profile Information.
                        LogMessage(string.Format("processing user '{0}' of {1}...", userCount,profileManager.Count),LogLevel.Info);                       
                        UserProfileData userData = new UserProfileData();
                        
                        userData.UserName = GetSingleValuedProperty(spUser, "WorkEmail");
                        
                        if (userData.UserName != string.Empty)
                        {
                            userData.AboutMe = GetSingleValuedProperty(spUser, "AboutMe");
                            userData.AskMeAbout = GetMultiValuedProperty(spUser, "SPS-Responsibility");
                            pData.Add(userData);
                            // Add to Serialization Class List of Profiles.
                            ups.ProfileData = pData;
                        }
                        
                        LogMessage(string.Format("processing user '{0}' of {1}...Done!", userCount++, profileManager.Count), LogLevel.Info);

                        // Only process the first item if we are in test mode.
                        if (Convert.ToBoolean(ConfigurationManager.AppSettings["TESTRUN"]))
                        {
                            break;
                        }

                    }
                    
                    // Serialize profiles to disk.
                    ups.Save();

                }
            }
            catch(Exception ex)
            {
                LogMessage("Exception trying to get profile properties:\n" + ex.Message, LogLevel.Error);
            }
        }

Note that the GetSingleValuedProperty method uses the server-side object model to retrieve a single-valued user profile property. GetSingleValuedProperty does the following, as shown in the next code example:

  • Gets the property object to extract data from using spUser[userProperty].

  • Returns the first value in the UserProfileValueCollection if the value is not null.

private static string GetSingleValuedProperty(UserProfile spUser,string userProperty)
        {
            string returnString = string.Empty;
            try
            {
                UserProfileValueCollection propCollection = spUser[userProperty];

                if (propCollection[0] != null)
                {
                    returnString = propCollection[0].ToString();
                }
                else
                {
                    LogMessage(string.Format("User '{0}' does not have a value in property '{1}'", spUser.DisplayName, userProperty), LogLevel.Warning);                       
                }
            }
            catch 
            {
                LogMessage(string.Format("User '{0}' does not have a value in property '{1}'", spUser.DisplayName, userProperty), LogLevel.Warning);                       
            }


            return returnString;
            
        }

Note that the GetMultiValuedProperty method uses the server-side object model to retrieve a multivalued user profile property. GetMultiValuedProperty does the following, as shown in the next code example:

  • Gets the user profile property object to extract data from using spUser[userProperty].

  • Builds a string of user profile property values separated by the PROPERTYSEPERATOR character specified in the App.config file.

private static string GetMultiValuedProperty(UserProfile spUser, string userProperty)
        {
            StringBuilder sb = new StringBuilder("");
            string seperator = ConfigurationManager.AppSettings["PROPERTYSEPERATOR"];

            string returnString = string.Empty;
            try
            {

                UserProfileValueCollection propCollection = spUser[userProperty];

                if (propCollection.Count > 1)
                {
                    for (int i = 0; i < propCollection.Count; i++)
                    {
                        if (i == propCollection.Count - 1) { seperator = ""; }
                        sb.AppendFormat("{0}{1}", propCollection[i], seperator);
                    }
                }
                else if (propCollection.Count == 1)
                {
                    sb.AppendFormat("{0}", propCollection[0]);
                }

            }
            catch
            {
                LogMessage(string.Format("User '{0}' does not have a value in property '{1}'", spUser.DisplayName, userProperty), LogLevel.Warning);
            }

            return sb.ToString();

        }
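The separator handling above amounts to joining the collection values with the configured PROPERTYSEPERATOR character, which the import application later splits apart again. A minimal sketch of that round trip (hypothetical helpers, not part of the sample):

```javascript
// Hypothetical sketch of the multivalued-property round trip:
// the extractor joins values with the PROPERTYSEPERATOR character,
// and the importer splits the stored string back into individual values.
function joinPropertyValues(values, separator) {
  return values.join(separator);          // mirrors GetMultiValuedProperty
}

function splitPropertyValues(propertyValue, separator) {
  return propertyValue.split(separator);  // mirrors SetSingleMVProfileProperty
}

var stored = joinPropertyValues(["Sales", "Marketing", "Training"], "|");
console.log(stored);                           // Sales|Marketing|Training
console.log(splitPropertyValues(stored, "|")); // [ 'Sales', 'Marketing', 'Training' ]
```

Because the separator character is what keeps the values apart in the XML file, it must be a character that never appears inside a property value, and the same PROPERTYSEPERATOR setting must be used by both console applications.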

34.3 Using Contoso.ProfileProperty.Migration.Import

<a name="sectionSection2"> </a>

This code sample runs as a console application. When the code sample runs, the Main method in Program.cs does the following:

  • Initializes the console application using InitializeConfiguration and InitializeWebService.

  • Deserializes the XML file containing the extracted user profile data.

  • For each user in the XML file, it does the following:

    • Extracts the UserName property from the XML file.

    • Uses SetSingleMVProfileProperty to set SPS-Responsibility on the user’s profile.

    • Uses SetSingleProfileProperty to set AboutMe on the user’s profile.

InitializeWebService connects to SharePoint Online, and sets a reference of the user profile service to an instance variable. Other methods in this code sample use this instance variable to write values to user profile properties. To administer the user profile, this code sample uses the userprofileservice.asmx web service on the SharePoint Online admin center.

static bool InitializeWebService()
        {
            try
            {
                string webServiceExt = "_vti_bin/userprofileservice.asmx";
                string adminWebServiceUrl = string.Empty;
                
                if (_profileSiteUrl.EndsWith("/"))
                    adminWebServiceUrl = _profileSiteUrl + webServiceExt;
                else
                    adminWebServiceUrl = _profileSiteUrl + "/" + webServiceExt;

                LogMessage("Initializing SPO web service " + adminWebServiceUrl, LogLevel.Information);

                SecureString securePassword = GetSecurePassword(_sPoAuthPasword);
                SharePointOnlineCredentials onlineCred = new SharePointOnlineCredentials(_sPoAuthUserName, securePassword);

                string authCookie = onlineCred.GetAuthenticationCookie(new Uri(_profileSiteUrl));

                CookieContainer authContainer = new CookieContainer();
                authContainer.SetCookies(new Uri(_profileSiteUrl), authCookie);

                // Setting up the user profile web service.
                _userProfileService = new UPSvc.UserProfileService();
                _userProfileService.Url = adminWebServiceUrl;

                // Assign previously created auth container to admin profile web service. 
                _userProfileService.CookieContainer = authContainer;
                return true;
            }
            catch (Exception ex)
            {
                LogMessage("Error initiating connection to profile web service in SPO " + ex.Message, LogLevel.Error);
                return false;

            }
            
        }
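The URL construction at the top of InitializeWebService can be sketched as a small function (a hypothetical helper, not part of the sample), which avoids a doubled slash when the admin site URL already ends with one:

```javascript
// Hypothetical helper mirroring the endpoint construction in
// InitializeWebService: append the fixed userprofileservice.asmx path,
// inserting a "/" only when the site URL does not already end with one.
function buildAdminWebServiceUrl(profileSiteUrl) {
  var webServiceExt = "_vti_bin/userprofileservice.asmx";
  return profileSiteUrl.endsWith("/")
    ? profileSiteUrl + webServiceExt
    : profileSiteUrl + "/" + webServiceExt;
}

console.log(buildAdminWebServiceUrl("https://contoso-admin.sharepoint.com"));
// https://contoso-admin.sharepoint.com/_vti_bin/userprofileservice.asmx
```

Note that the URL targets the SharePoint Online admin center (contoso-admin.sharepoint.com), not a regular site collection, because profile administration goes through the admin site.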

The SetSingleMVProfileProperty method sets a multivalued user profile property, such as SPS-Responsibility, by doing the following:

  • Splitting PropertyValue into a string array called arrs to store user profile property values. The string is split using the PROPERTYSEPERATOR configuration setting specified in App.Config.

  • Assigning the values of arrs to a ValueData array on the user profile service.

  • Creating a PropertyData array on the user profile service. The name of the user profile property and the ValueData array are passed to properties on the PropertyData object. This array has one element only because only one multivalued user profile property will be imported.

The data is written to the user profile service using ModifyUserPropertyByAccountName on the userprofileservice.asmx web service on the SharePoint Online admin center. The user running this code sample must be an Office 365 administrator.

static void SetSingleMVProfileProperty(string UserName, string PropertyName, string PropertyValue)
        {

            try
            {
                string[] arrs = PropertyValue.Split(ConfigurationManager.AppSettings["PROPERTYSEPERATOR"][0]);
                
               UPSvc.ValueData[] vd = new UPSvc.ValueData[arrs.Count()];
               
               for (int i=0;i<=arrs.Count()-1;i++)
               {
                    vd[i] = new UPSvc.ValueData();
                    vd[i].Value = arrs[i];
                }
               
                UPSvc.PropertyData[] data = new UPSvc.PropertyData[1];
                data[0] = new UPSvc.PropertyData();
                data[0].Name = PropertyName;
                data[0].IsValueChanged = true;
                data[0].Values = vd;
                               
                _userProfileService.ModifyUserPropertyByAccountName(string.Format(@"i:0#.f|membership|{0}", UserName), data);

            }
            catch (Exception ex)
            {
                LogMessage("Exception trying to update profile property " + PropertyName + " for user " + UserName + "\n" + ex.Message, LogLevel.Error);
            }

        }
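Note how the account name passed to ModifyUserPropertyByAccountName is claims-encoded. A minimal sketch of that formatting (a hypothetical helper, not part of the sample):

```javascript
// Hypothetical helper showing the claims encoding applied to the account
// name before calling ModifyUserPropertyByAccountName: SharePoint Online
// identifies users with the "i:0#.f|membership|" claims prefix followed
// by the user's work email address.
function toClaimsAccountName(workEmail) {
  return "i:0#.f|membership|" + workEmail;
}

console.log(toClaimsAccountName("user@contoso.com"));
// i:0#.f|membership|user@contoso.com
```

This is why the extract and import sides must agree on the Work email property: it is the key that matches an on-premises profile to its SharePoint Online counterpart.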

The SetSingleProfileProperty method sets single-valued user profile properties, such as AboutMe. SetSingleProfileProperty implements the same technique as SetSingleMVProfileProperty, but uses a ValueData array with one element only.

static void SetSingleProfileProperty(string UserName, string PropertyName, string PropertyValue)
        {

            try
            {
                UPSvc.PropertyData[] data = new UPSvc.PropertyData[1];
                data[0] = new UPSvc.PropertyData();
                data[0].Name = PropertyName;
                data[0].IsValueChanged = true;
                data[0].Values = new UPSvc.ValueData[1];
                data[0].Values[0] = new UPSvc.ValueData();
                data[0].Values[0].Value = PropertyValue;
                _userProfileService.ModifyUserPropertyByAccountName(UserName, data);
            }
            catch (Exception ex)
            {
                LogMessage("Exception trying to update profile property " + PropertyName + " for user " + UserName + "\n" + ex.Message, LogLevel.Error);
            }

        }

34.4 Additional resources

<a name="bk_addresources"> </a>

35 Deploying Development Office 365 Sites to Microsoft Azure

35.0.1 Summary

When developing any type of web application, most development is done locally using http://localhost. Some projects use local resources, or a mix of local and remote resources. Taking these projects from a local development environment to production involves a handful of tasks, such as changing database connection strings, URLs, and configuration settings.

Web projects that leverage the Office 365 APIs are no different. These projects leverage Microsoft’s Azure AD service to authenticate the applications and obtain OAuth 2.0 access tokens. These tokens are used by the web applications to authenticate with the Office 365 APIs.

This page explains the steps involved in taking an Office 365 API development project and launching it as a working sample hosted entirely in Microsoft Azure using Office 365, Azure Active Directory, and [Azure Websites](http://azure.microsoft.com/en-us/services/websites/).

Deploying an Office 365 API web application to Microsoft Azure from a local development environment requires three high-level steps to be performed, as outlined in the sections that follow.

This page assumes that you have a local working ASP.NET application that uses the Office 365 APIs. For reference, it will use the O365-WebApp-SingleTenant project found in the OfficeDev account in GitHub.

36 Create and Configure an Azure Website

In this step you will create an Azure website that will be used to host the web application.

  1. Navigate to the Azure Management Portal and login using your Organization ID account.
  2. After logging in, using the navigation sidebar, select WEBSITES.
  3. On the websites page, click the NEW link in the footer found in the lower-left corner of the page.
  4. In the wizard that appears, select Quick Create, enter a name for the site in the URL field, select a Web Hosting Plan and Subscription.

The Quick Create settings: The URL field is set to o365api-01, Web Hosting Plan is set to Default1 (East US, Standard), Subscription is set to Azure MSDN (primary).

Make sure to keep a note of the name of the website you create as it will be needed later.

  5. Finally, click the Create Website link to create the site.

Give Azure a few moments to create the site. After creating the site, you can specify app settings through the web interface. This allows you to override any <appSettings> values in the project’s web.config file through the website’s administration interface, without redeploying your codebase for simple configuration changes.

  1. Click the website that you just created within the Azure Management Portal.
  2. Click the CONFIGURE link in the top navigation.
  3. Scroll down to the App Settings section and add three new entries:
  • ida:ClientID
  • ida:Password
  • ida:TenantID
  4. Copy the corresponding values from the working project’s web.config to these settings values in your Azure website as shown in the following figure:

WEBSITE_NODE_DEFAULT_VERSION is 0.10.32, ida:ClientID is 92b1e137-c36f-4bfe-9e1c-01ef546ce4a9, ida:Password is Bns06N18ZiyYfMcyU9qUfGnZbnkBiPZfUptLDsU6cml, ida:TenantId is partially redacted. The center numbers of the GUID are -45ee-8afc-.

  5. In the footer, click the SAVE button to save your changes.

At this point the Azure website is setup and configured to host the Office 365 API web project that you will deploy in a later step.


37 Configure the Azure AD Application

In this step you will modify the Azure AD application used in the development & testing of the Office 365 application.

  1. Navigate to the Azure Management Portal and login using your Organization ID account.
  2. After logging in, using the navigation sidebar, select ACTIVE DIRECTORY.
  3. On the active directory page, select the directory that is linked to your Office 365 tenant.
  4. Next, click the APPLICATIONS item in the top navigation.
  5. Within the Properties section, update the SIGN-ON URL to point to the default URL of the Azure Website you created. Take note to use the HTTPS endpoint that is provided with all Azure websites.

Name is set to O365-WebApp-SingleTenant.Office365App, Sign-on URL is set to https://o365api-01.azurewebsites.net

  6. Within the Single Sign-On section, update the App ID URI to use the domain for the Azure website (shown in the following figure).
  7. Next, update the REPLY URL so the only URL listed is the homepage of the Azure website:

App ID URI is https://o365api-01.azurewebsites.net/O365-WebApp-SingleTenant, Reply URL is https://o365api-01.azurewebsites.net/

  8. In the footer, click the SAVE button to save your changes.

At this point, the Azure AD application used by the Office 365 API web project has been configured to work with the new Azure website.


38 Configure the ASP.NET Project

In this step you will configure the ASP.NET project in your application to use the new Azure Website.

For the sample application used in the example for this guidance, no extra work is actually required. However the web application does contain the settings within the web.config file for the Azure AD application and Azure AD tenant used during development. Some developers may choose to use different Azure AD applications or even different Azure subscriptions for their development and production instances.

In a previous step outlined in this page, when you created the Azure website, you set the app settings for the application that are typically found in the web.config file. To ensure the web application receives these values from the Azure website configuration, it’s recommended that you replace the values within the web.config with placeholder values instead.

  1. Open the project’s web.config file.
  2. Locate the app settings ida:ClientID, ida:Password, and ida:TenantId.
  3. Replace the values of these settings with a placeholder value:

    <add key="ida:TenantId" value="set-in-azure-website-config" />
    <add key="ida:ClientID" value="set-in-azure-website-config" />
    <add key="ida:Password" value="set-in-azure-website-config" />

  4. Save your changes.

At this point the web application, Azure website & application in Azure AD are all configured correctly and ready to be deployed.


39 Deploy the Office 365 API ASP.NET Web Application

In this step you will publish the Office 365 API web application to the Azure website. Once the site has been deployed you will test it to ensure everything works as desired.

This step assumes you have the Microsoft Azure SDK, version 2.0 or higher, installed.

39.1 Deploy the ASP.NET Web Application

  1. Open your Office 365 API web application in Visual Studio 2013.
  2. Within the Solution Explorer tool window, right-click the project and select Publish to start the Publish Web wizard.
  3. On the Profile tab, select Microsoft Azure Website.

At this point, you will be prompted to log in to your Azure subscription using your Organization ID.

  4. After logging in, select the website that you created in a previous step from this page and click OK.

The Select Existing Website dialog shows Existing Websites set to o365api-01.

  5. On the Connection tab, click the Validate Connection button to ensure the connection profile was successfully downloaded and applied.

An arrow points to the Validate Connection button near the bottom of the dialog box, with a green check mark next to the button.

  6. Click the Publish button to publish the web application to the Azure website.

39.2 Test the ASP.NET Web Application

After publishing the web application to the Azure website, Visual Studio will open a browser and navigate to the site’s homepage.

By default this is the HTTP endpoint. Recall from the previous step, when you configured the Azure AD application, that you set it to accept sign-ons only from the HTTPS endpoint. Before you use the application, update the URL to point to the HTTPS endpoint.

  1. In the browser, update the URL to go to the HTTPS homepage for the Azure website. In the example on this page, that is https://o365api-01.azurewebsites.net.
  2. Click the Sign In link in the header at the top-right of the page. This will redirect you to the Azure AD sign on page.

If you get an error at this point, it's likely an issue with the three add-in settings you created for the Azure website. Go back and make sure their values match the values from the Azure AD tenant and application.

  3. After successfully logging in, you will be redirected back to the homepage of the web application running in the Azure website you created.

At this point you have successfully deployed your Office 365 API web application project to run in an Azure website.



39.2.2 Applies to

  • Office 365 Multi Tenant (MT)
  • Office 365 Dedicated (D)

39.2.3 Author

Andrew Connell - @andrewconnell

39.2.4 Version history

Version Date Comments
0.1 January 2, 2015 First draft

40 Office 365 development and SharePoint patterns and practices solution guidance

The Office 365 Developer and SharePoint Patterns and Practices (PnP) initiative provides samples and documentation to help you to implement typical customizations for Office 365 or for SharePoint (Online and on-premises) based on your functional requirements.

Applies to: Office 365 | SharePoint 2016 | SharePoint 2013 | SharePoint Online

40.1 In this section

Read this content If you want to…
Branding and site provisioning Use SharePoint Add-ins to provision and manage SharePoint site branding.
Customizing the “modern” experiences in SharePoint Online Customization options with SharePoint Online “modern” experiences.
Composite business add-ins Use composite business apps to integrate your SharePoint solutions with your business processes and technologies.
ECM solutions Perform common ECM tasks such as setting site policies, uploading files, synchronizing term groups, and more.
Localization solutions Localize your SharePoint site contents and UI text.
Search solutions Find out about the SharePoint search architecture, search APIs, and search add-ins.
Security and Performance Shows you how to improve the security and performance of your SharePoint sites with OAuth, support for Germany, China and US Government environments, cross-domain images, elevated privileges, and external sharing.
SharePoint Add-In Recipes Find a list of SharePoint add-in recipes.
Transform farm solutions to the SharePoint add-in model Convert your farm solutions to the SharePoint add-in model.
Sandbox solution transformation guidance Convert your sandbox solutions to add-in model or alternative solutions.
User Profile Solutions Work with SharePoint user profile data.
Deploying your SharePoint add-ins Shows how to deploy your SharePoint add-ins.
PnP remote provisioning Learn about remote provisioning for your Office 365 and SharePoint Online site collections using features of the add-in model.
PnP remote timer job framework Describes timer jobs which are background tasks that operate on your SharePoint sites.

40.2 Additional resources

<a name="bk_addresources"> </a>

41 Personalize search results sample add-in for SharePoint

You can personalize SharePoint by filtering information that is shown to the user based on the value of a user profile property.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The Search.PersonalizedResults code sample shows how you can personalize SharePoint by filtering information based on the value of a user profile property. Some examples of personalization include:

  • News articles or other content filtered by country or location.

  • Navigation links filtered based on the user’s role or organization.

  • Restaurants or retail outlet listings based on the location of your place of business.

This code sample uses a provider-hosted add-in to display search results to the user that include either all sites or only team sites that the user has access to. To do this, the sample:

  • Checks the value of the AboutMe user profile property.

  • Builds a search query filter string associated with the value of the AboutMe user profile property.

  • Runs the search query and displays the search query results.

41.1 Before you begin

<a name="sectionSection0"> </a>

To get started, download the Search.PersonalizedResults sample add-in from the Office 365 Developer patterns and practices project on GitHub.

41.2 Using the Search.PersonalizedResults app

<a name="sectionSection1"> </a>

When you run this code sample, a provider-hosted application appears, as shown in Figure 1.

Figure 1. Start page of the Search.PersonalizedResults app

Screenshot that shows the start page of the Search.PersonalizedResults app

This article describes the Perform personalized search of all site templates using profile data scenario. Choosing Perform Personalized Search returns filtered search results that contain team sites only, as shown in Figure 2. Notice that the Template column contains sites of type STS only.

Figure 2. Search results showing team sites only

Screenshot of the search results showing team sites only

For handling personalization scenarios, you can change the search query by:

  • Reading and testing the value of a user profile property for that user. This code sample tests the About Me property for a value of AppTest.

  • Taking a specific course of action based on the value of the user profile property. For example, if the value of the About Me user profile property is AppTest, this code sample removes the team site filter and returns search results that contain all sites.

41.2.1 To enter AppTest in the About Me user profile property

  1. At the top of your Office 365 site, choose your profile picture, and then choose About me, as shown in Figure 3.

  2. On the About me page, choose edit your profile.

  3. In About me, enter AppTest.

  4. Choose Save all and close.

Figure 3. Navigating to a user’s profile page by choosing About me

Screenshot of the user profile page with About me highlighted.

Return to the Search.PersonalizedResults provider-hosted add-in and choose Perform Personalized Search again. The add-in changes the filter on the search query to show all sites instead of team sites only, as shown in Figure 4. The Template column now contains several different site template types.

Figure 4. Search results showing all sites

Screenshot of search results showing all sites

Choosing Perform Personalized Search calls the btnPersonalizedSearch_Click method in default.aspx.cs. btnPersonalizedSearch_Click performs the following actions:

  • Uses PeopleManager to get all user profile properties for the user running this add-in.

  • Retrieves and checks the value of the AboutMe user profile property. If the value of the AboutMe property is AppTest, the search query retrieves all sites using the query string contentclass:"STS_Site". If the value of the AboutMe property is not AppTest, the team site filter is appended to the query string (WebTemplate=STS), and the search query retrieves team sites only.

  • Calls the ProcessQuery method to retrieve the search results based on the supplied query string. ProcessQuery also demonstrates how to specify a list of properties to return with the search results.

  • Calls the FormatResults method to format the search results into an HTML table.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

protected void btnPersonalizedSearch_Click(object sender, EventArgs e)
        {
            var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);

            using (var clientContext = spContext.CreateUserClientContextForSPHost())
            {
                // Load user profile properties.
                PeopleManager peopleManager = new PeopleManager(clientContext);
                PersonProperties personProperties = peopleManager.GetMyProperties();
                clientContext.Load(personProperties);
                clientContext.ExecuteQuery();
                // Check the value of About Me. 
                string aboutMeValue = personProperties.UserProfileProperties["AboutMe"];
                string templateFilter = ResolveAdditionalFilter(aboutMeValue);
                // Build the query string.
                string query = "contentclass:\"STS_Site\" " + templateFilter;
                ClientResult<ResultTableCollection> results = ProcessQuery(clientContext, query);
                lblStatus2.Text = FormatResults(results);
            }
        }
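The ResolveAdditionalFilter helper called above is not reproduced in this excerpt. Based on the behavior described earlier, it returns no extra filter for AppTest users and the team-site filter for everyone else; the sketch below (Python, with an illustrative function name) captures just that branch:

```python
def resolve_additional_filter(about_me_value):
    """Mirror the sample's filter selection: users whose About Me
    property is 'AppTest' see all sites; others see team sites only."""
    if about_me_value == "AppTest":
        # No extra filter: the base query returns all site collections.
        return ""
    # Restrict the search to team sites (template STS).
    return "WebTemplate=STS"
```

Concatenated with the base query contentclass:"STS_Site", this yields the two query strings described above.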

private ClientResult<ResultTableCollection> ProcessQuery(ClientContext ctx, string keywordQueryValue)
        {
            KeywordQuery keywordQuery = new KeywordQuery(ctx);
            keywordQuery.QueryText = keywordQueryValue;
            keywordQuery.RowLimit = 500;
            keywordQuery.StartRow = 0;
            keywordQuery.SelectProperties.Add("Title");
            keywordQuery.SelectProperties.Add("SPSiteUrl");
            keywordQuery.SelectProperties.Add("Description");
            keywordQuery.SelectProperties.Add("WebTemplate");
            keywordQuery.SortList.Add("SPSiteUrl", Microsoft.SharePoint.Client.Search.Query.SortDirection.Ascending);
            SearchExecutor searchExec = new SearchExecutor(ctx);
            ClientResult<ResultTableCollection> results = searchExec.ExecuteQuery(keywordQuery);
            ctx.ExecuteQuery();
            return results;
        }
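The FormatResults method is likewise not shown in this excerpt; conceptually it renders the rows of the result table as an HTML table over the selected properties. A minimal sketch of that idea (Python, assuming rows arrive as dictionaries keyed by the managed property names selected in ProcessQuery):

```python
def format_results(rows):
    """Render search-result rows (dicts keyed by managed property name)
    as a simple HTML table, one row per result."""
    columns = ["Title", "SPSiteUrl", "Description", "WebTemplate"]
    header = "".join("<th>{0}</th>".format(c) for c in columns)
    body = ""
    for row in rows:
        # Missing properties render as empty cells.
        cells = "".join("<td>{0}</td>".format(row.get(c, "")) for c in columns)
        body += "<tr>" + cells + "</tr>"
    return "<table><tr>" + header + "</tr>" + body + "</table>"
```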

41.3 Additional resources

<a name="bk_addresources"> </a>

42 Read or update user profile properties sample add-in for SharePoint

You can use a provider-hosted add-in to read or update SharePoint single and multivalued user profile properties.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The UserProfile.Manipulation.CSOM sample shows you how to read and update user profile properties for a particular user. This sample uses a provider-hosted add-in to:

  • Read and display all user profile properties for a user.

  • Update a single-valued user profile property.

  • Update a multivalued user profile property.

Use this solution if you want to:

  • Read or write data to a user profile property for a user.

  • Use user profile property values to personalize SharePoint.

Note This code sample only runs on Office 365.

42.1 Before you begin

<a name="sectionSection0"> </a>

To get started, download the UserProfile.Manipulation.CSOM sample add-in from the Office 365 Developer patterns and practices project on GitHub.

Before you run Scenario 1:

  1. At the top of your Office 365 site, choose your profile picture, and then choose About me, as shown in Figure 1.

  2. On the About me page, choose edit your profile.

  3. In About me, enter I work at Contoso.

  4. Choose Save all and close.

Before you run Scenario 3:

  1. At the top of your site, choose your profile picture, and then choose About me, as shown in Figure 1.

  2. On the About me page, choose edit your profile.

  3. On Edit Details, choose Details.

  4. In Skills, enter C#, JavaScript.

  5. Choose Save all and close.

Figure 1. Navigating to a user’s profile page by choosing About me

Screenshot of the user’s profile page with About me highlighted

42.2 Using the UserProfile.Manipulation.CSOM app

<a name="sectionSection1"> </a>

When you run this sample, a provider-hosted add-in starts, as shown in Figure 2.

Figure 2. Start page of the UserProfile.Manipulation.CSOM app

Screenshot of the start page of the UserProfile.Manipulation.CSOM app

This code sample includes three scenarios.

Scenario Shows how to…
1 Read all user profile properties for the user running the app.
2 Update a single-valued user profile property. (Note: This scenario is only supported in Office 365.)
3 Update a multivalued user profile property. (Note: This scenario is only supported in Office 365.)

42.2.1 Scenario 1: Read all user profile properties

When you choose Run scenario 1, the add-in reads all user profile properties for the current user, and then displays the user profile data in Current user profile properties, as shown in Figure 3.

Figure 3. Current user’s profile properties data

Screenshot of the current user’s profile properties data

Choosing Run scenario 1 calls the btnScenario1_Click method in CodeSample1.aspx.cs to perform the following tasks:

  • Use PeopleManager to retrieve all the user profile properties for the current user.

  • Iterate over PersonProperties.UserProfileProperties to list the values of the user profile properties in a text box.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

protected void btnScenario1_Click(object sender, EventArgs e)
        {

            var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);

            using (var clientContext = spContext.CreateUserClientContextForSPHost())
            {
                // Get the people manager instance and load current properties.
                PeopleManager peopleManager = new PeopleManager(clientContext);
                PersonProperties personProperties = peopleManager.GetMyProperties();
                clientContext.Load(personProperties);
                clientContext.ExecuteQuery();

                // Output user profile properties to a text box.
                txtProperties.Text = "";
                foreach (var item in personProperties.UserProfileProperties)
                {
                    txtProperties.Text += string.Format("{0} - {1}{2}", item.Key, item.Value, Environment.NewLine);
                }
            }
        }

42.2.2 Scenario 2: Update a single-valued user profile property

Scenario 2 shows how to update a single-valued user profile property. As shown in Figure 4, the current value of the About me user profile property for the user running this add-in is I work at Contoso. To update the value of the About me user profile property, in the About me new value box, enter I am a software engineer at Contoso, and then choose Run scenario 2. The code updates the value of the About me property to I am a software engineer at Contoso. As shown in Figure 5, the add-in updates About me current value with the new value of the About me user profile property.

Figure 4. Scenario 2 start page

Screenshot of the start page for Scenario 2

Figure 5. Updated About me user profile property

Screenshot of the updated About Me user profile property

Choosing Run scenario 2 calls the btnScenario2_Click method in CodeSample2.aspx.cs to do the following:

  • Use PeopleManager to get the user profile properties of the current user.

  • Format the text entered by the user in HTML.

  • Update the value of the AboutMe user profile property by using SetSingleValueProfileProperty. SetSingleValueProfileProperty accepts three parameters:

    • The account name of the user whose user profile you’re updating.

    • The user profile property name (AboutMe in this scenario).

    • The property value, in HTML format (I am a software engineer at Contoso in this scenario).

protected void btnScenario2_Click(object sender, EventArgs e)
        {
            var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);

            using (var clientContext = spContext.CreateUserClientContextForSPHost())
            {
                // Get the people manager instance and initialize the account name.
                PeopleManager peopleManager = new PeopleManager(clientContext);
                PersonProperties personProperties = peopleManager.GetMyProperties();
                clientContext.Load(personProperties, p => p.AccountName);
                clientContext.ExecuteQuery();

                // Convert entry to HTML.
                string updatedValue = (txtAboutMe.Text).Replace(Environment.NewLine, "");

                // Update the AboutMe property for the user using account name from the user profile.
                peopleManager.SetSingleValueProfileProperty(personProperties.AccountName, "AboutMe", updatedValue);
                clientContext.ExecuteQuery();

            }
        }

Note If you use custom user profile properties, configure the property to be editable by users. The technique used in this scenario will work for custom user profile properties.

42.2.3 Scenario 3: Update a multivalued user profile property

Scenario 3 shows how to update a multivalued user profile property. Figure 6 shows the start page for Scenario 3. Skills current value shows the skills of the user running the app. The skills are read from the SPS-Skills user profile property for the user.

Figure 6. Scenario 3 start page

Screenshot of the start page for Scenario 3

To add new skills to the SPS-Skills user profile property from this add-in:

  1. Enter HTML5, and then choose Add Skill.

  2. Enter ASP.Net, and then choose Add Skill.

  3. Choose Run scenario 3.

  4. Verify that Skills current value shows the new list of skills for the user.

  5. Verify that the SPS-Skills user profile property for the user now shows the new list of skills.

Choosing Run scenario 3 calls btnScenario3_Click in CodeSample3.aspx.cs to do the following:

  • Use PeopleManager to get the user profile properties of the current user.

  • Read the list of skills shown in the list box.

  • Save the new skills to the SPS-Skills user profile property by using SetMultiValuedProfileProperty. SetMultiValuedProfileProperty accepts three parameters:

    • The account name of the user whose user profile is being updated.

    • The user profile property name, which is SPS-Skills.

    • The property values as a List of string objects.

protected void btnScenario3_Click(object sender, EventArgs e)
        {
            var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);

            using (var clientContext = spContext.CreateUserClientContextForSPHost())
            {
                // Get the people manager instance and initialize the account name.
                PeopleManager peopleManager = new PeopleManager(clientContext);
                PersonProperties personProperties = peopleManager.GetMyProperties();
                clientContext.Load(personProperties, p => p.AccountName);
                clientContext.ExecuteQuery();

                // Collect the user's skills from the list box in order to update the user's profile.
                List<string> skills = new List<string>();
                for (int i = 0; i < lstSkills.Items.Count; i++)
                {
                    skills.Add(lstSkills.Items[i].Value);
                }

                // Update the SPS-Skills property for the user using account name from the user's profile.
                peopleManager.SetMultiValuedProfileProperty(personProperties.AccountName, "SPS-Skills", skills);
                clientContext.ExecuteQuery();

                // Refresh the values.
                RefreshUIValues();
            }

        }

42.3 Additional resources

<a name="bk_addresources"> </a>

43 Records management extensions sample app for SharePoint

As part of your Enterprise Content Management (ECM) strategy, you can enable and change in-place records management settings on your SharePoint sites and lists.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The ECM.RecordsManagement sample shows you how to use a provider-hosted SharePoint app to control the in-place records management settings for a site or list.

Use this solution if you want to:

  • Configure in-place records management settings during your custom site provisioning process.

43.1 Before you begin

<a name="sectionSection0"> </a>

To get started, download the ECM.RecordsManagement sample app from the Office 365 Developer patterns and practices project on GitHub.

Before you run this app:

  • Activate the In-Place Records Management feature on the site collection, as shown in Figure 1.

    Figure 1. Activating In-Place Records Management on your site collection

    Screenshot of the Site Collections Features page with the activated In-Place Record Management feature highlighted.

  • In site settings, verify that you see Record declaration settings in Site Collection Administration, as shown in Figure 2.

    Figure 2. Record declaration settings in Site Settings

    Screenshot of the Site Settings page with Record declaration settings highlighted.

43.2 Using the ECM.RecordsManagement sample app

<a name="sectionSection1"> </a>

When you start the app, the start page displays the two scenarios that are available:

  • Enabling in-place records management for sites

  • Enabling in-place records management for lists

Figure 3. ECM.RecordsManagement app start page

Screenshot of the app start page, showing the two scenarios.

You can use Scenario 1 to build a UI to control the records management settings on your site collection. The UI in this app is similar to the UI found in Record declaration settings in Site Settings (see Figure 2). You can also activate or deactivate the In-Place Records Management feature on your site collection.

You can use Scenario 2 to build a UI to control the records management settings on lists. The UI in this app is similar to the UI found in Record declaration settings in the library settings on your list.

Figure 4. Record declaration settings on a list

Screenshot of the Library Record Declaration Settings page.

43.2.1 Scenario 1

Scenario 1 addresses in-place records management features and settings for sites. The app UI includes a Deactivate (or Activate) button, as shown in Figure 5. Choosing this button deactivates (or activates) the In-Place Records Management feature on the site collection.

Figure 5. Deactivate button for the In-Place Records Management feature

Screenshot that shows the deactivate or activate button for in-place records management.

The following code activates or deactivates the In-Place Records Management feature on the site collection. The DisableInPlaceRecordsManagementFeature and EnableSiteForInPlaceRecordsManagement methods are part of the AppModelExtensions\RecordsManagementExtensions.cs file in the OfficeDevPnP.Core.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

protected void btnToggleIPRStatus_Click(object sender, EventArgs e)
        {
            if (cc.Site.IsInPlaceRecordsManagementActive())
            {
                cc.Site.DisableInPlaceRecordsManagementFeature();
                IPRStatusUpdate(false);
            }
            else
            {
                cc.Site.EnableSiteForInPlaceRecordsManagement();
                IPRStatusUpdate(true);
            }
        }

OfficeDevPnP.Core includes extension methods to get and set all site-scoped in-place records management settings. The following code from the EnableSiteForInPlaceRecordsManagement method shows how to use these extension methods to set restrictions, and specify who can declare or undeclare records on your site.

public static void EnableSiteForInPlaceRecordsManagement(this Site site)
        {
            // Activate the In-Place Records Management feature if not yet enabled.
            if (!site.IsFeatureActive(new Guid(INPLACE_RECORDS_MANAGEMENT_FEATURE_ID)))
            {
                // Note: this also sets the ECM_SITE_RECORD_RESTRICTIONS value to "BlockDelete, BlockEdit".
                site.ActivateInPlaceRecordsManagementFeature();
            }

            // Enable in-place records management in all locations.
            site.SetManualRecordDeclarationInAllLocations(true);

            // Set restrictions to default values after enablement (this is also done at feature activation).
            EcmSiteRecordRestrictions restrictions = EcmSiteRecordRestrictions.BlockDelete | EcmSiteRecordRestrictions.BlockEdit;
            site.SetRecordRestrictions(restrictions);

            // Set record declaration to default value.
            site.SetRecordDeclarationBy(EcmRecordDeclarationBy.AllListContributors);

            // Set record undeclaration to default value.
            site.SetRecordUnDeclarationBy(EcmRecordDeclarationBy.OnlyAdmins);

        }

When the user changes their in-place records management settings and chooses the Save changes button, the following code in the btnSaveSiteScopedIPRSettings_Click method runs.

protected void btnSaveSiteScopedIPRSettings_Click(object sender, EventArgs e)
        {
            EcmSiteRecordRestrictions restrictions = (EcmSiteRecordRestrictions)Convert.ToInt32(rdRestrictions.SelectedValue);
            cc.Site.SetRecordRestrictions(restrictions);
            cc.Site.SetManualRecordDeclarationInAllLocations(Convert.ToBoolean(rdAvailability.SelectedValue));
            EcmRecordDeclarationBy declareBy = (EcmRecordDeclarationBy)Convert.ToInt32(rdDeclarationBy.SelectedValue);
            cc.Site.SetRecordDeclarationBy(declareBy);
            EcmRecordDeclarationBy unDeclareBy = (EcmRecordDeclarationBy)Convert.ToInt32(rdUndeclarationBy.SelectedValue);
            cc.Site.SetRecordUnDeclarationBy(unDeclareBy);
        }

In the previous code, a call is made to the SetRecordRestrictions method in RecordsManagementExtensions.cs. The SetRecordRestrictions method in the next example shows how to set restrictions on the records.

public static void SetRecordRestrictions(this Site site, EcmSiteRecordRestrictions restrictions)
        {
            string restrictionsProperty = "";

            if (restrictions.Has(EcmSiteRecordRestrictions.None))
            {
                restrictionsProperty = EcmSiteRecordRestrictions.None.ToString();
            }
            else if (restrictions.Has(EcmSiteRecordRestrictions.BlockEdit))
            {
                // BlockEdit is always used in conjunction with BlockDelete.
                restrictionsProperty = EcmSiteRecordRestrictions.BlockDelete.ToString() + ", " + EcmSiteRecordRestrictions.BlockEdit.ToString();
            }
            else if (restrictions.Has(EcmSiteRecordRestrictions.BlockDelete))
            {
                restrictionsProperty = EcmSiteRecordRestrictions.BlockDelete.ToString();
            }

            // Set property bag entry.
            site.RootWeb.SetPropertyBagValue(ECM_SITE_RECORD_RESTRICTIONS, restrictionsProperty);
        }
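The precedence in SetRecordRestrictions (None wins outright, and BlockEdit is always persisted together with BlockDelete) is easy to restate as a small pure function. A sketch of the same mapping (Python, using a set of flag names in place of the C# enum):

```python
def restrictions_property(flags):
    """Map a set of restriction flag names to the property-bag string,
    following the same precedence as SetRecordRestrictions."""
    if "None" in flags:
        return "None"
    if "BlockEdit" in flags:
        # BlockEdit is always used in conjunction with BlockDelete.
        return "BlockDelete, BlockEdit"
    if "BlockDelete" in flags:
        return "BlockDelete"
    return ""
```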

43.2.2 Scenario 2

Scenario 2 shows how to interact with in-place records management settings for lists. When the app installs, it creates a document library called IPRTest. When you use this app to change and save the in-place records management settings, the changes are applied to IPRTest.

Note To use in-place records management settings on a list, you must activate the In-place Records Management feature on your site collection, as shown in Figure 1 earlier in this article.

The following code in Default.aspx.cs runs when a user chooses the Save Changes button.

protected void btnSaveListScopedIPRSettings_Click(object sender, EventArgs e)
        {
            List ipr = cc.Web.GetListByTitle(IPR_LIBRARY);
            EcmListManualRecordDeclaration listManual = (EcmListManualRecordDeclaration)Convert.ToInt32(rdListAvailability.SelectedValue);
            ipr.SetListManualRecordDeclaration(listManual);
            ipr.SetListAutoRecordDeclaration(chbAutoDeclare.Checked);

            // Refresh the settings as AutoDeclare changes the manual settings.
            if (ipr.IsListRecordSettingDefined())
            {
                rdListAvailability.SelectedValue = Convert.ToString((int)ipr.GetListManualRecordDeclaration());
                chbAutoDeclare.Checked = ipr.GetListAutoRecordDeclaration();
                rdListAvailability.Enabled = !chbAutoDeclare.Checked;
            }

        }

The code calls the following two methods in the RecordsManagementExtensions.cs file of OfficeDevPnP.Core:

  • SetListManualRecordDeclaration - Defines the manual records declaration setting for this list.

  • SetListAutoRecordDeclaration - Automatically declares items added to this list as a record. If records declaration is set to automatic on this list, the manual records declaration settings on the list no longer apply. Event receivers are added to the list to start specific records management actions when events occur.

public static void SetListManualRecordDeclaration(this List list, EcmListManualRecordDeclaration settings)
        {
            if (settings == EcmListManualRecordDeclaration.UseSiteCollectionDefaults)
            {
                // If you set list record declaration back to the default values, you also need to 
                // turn off auto record declaration. Other property bag values are left as is; when 
                // settings are changed again these properties are also again usable.
                if (list.PropertyBagContainsKey(ECM_AUTO_DECLARE_RECORDS))
                {
                    list.SetListAutoRecordDeclaration(false);
                }
                // Set the property that dictates custom list record settings to false.
                list.SetPropertyBagValue(ECM_IPR_LIST_USE_LIST_SPECIFIC, false.ToString());
            }
            else if (settings == EcmListManualRecordDeclaration.AlwaysAllowManualDeclaration)
            {
                list.SetPropertyBagValue(ECM_ALLOW_MANUAL_DECLARATION, true.ToString());
                // Set the property that dictates custom list record settings to true.
                list.SetPropertyBagValue(ECM_IPR_LIST_USE_LIST_SPECIFIC, true.ToString());
            } 
            else if (settings == EcmListManualRecordDeclaration.NeverAllowManualDeclaration)
            {
                list.SetPropertyBagValue(ECM_ALLOW_MANUAL_DECLARATION, false.ToString());
                // Set the property that dictates custom list record settings to true.
                list.SetPropertyBagValue(ECM_IPR_LIST_USE_LIST_SPECIFIC, true.ToString());
            }
            else
            {
                throw new ArgumentOutOfRangeException("settings");
            }
        }

public static void SetListAutoRecordDeclaration(this List list, bool autoDeclareRecords)
        {
            // Determine the SharePoint version based on the loaded CSOM library.
            Assembly asm = Assembly.GetAssembly(typeof(Microsoft.SharePoint.Client.Site));
            int sharePointVersion = asm.GetName().Version.Major;

            if (autoDeclareRecords)
            {
                // Set the property that dictates custom list record settings to true.
                list.SetPropertyBagValue(ECM_IPR_LIST_USE_LIST_SPECIFIC, true.ToString());
                // Prevent manual declaration.
                list.SetPropertyBagValue(ECM_ALLOW_MANUAL_DECLARATION, false.ToString());

                // Hook up the needed event handlers.
                list.Context.Load(list.EventReceivers);
                list.Context.ExecuteQuery();

                List<EventReceiverDefinition> currentEventReceivers = new List<EventReceiverDefinition>(list.EventReceivers.Count);
                currentEventReceivers.AddRange(list.EventReceivers);

                // Track changes to see if a list.Update is needed.
                bool eventReceiverAdded = false;
                
                // ItemUpdating receiver.
                EventReceiverDefinitionCreationInformation newEventReceiver = CreateECMRecordEventReceiverDefinition(EventReceiverType.ItemUpdating, 1000, sharePointVersion);
                if (!ContainsECMRecordEventReceiver(newEventReceiver, currentEventReceivers))
                {
                    list.EventReceivers.Add(newEventReceiver);
                    eventReceiverAdded = true;
                }
                // ItemDeleting receiver.
                newEventReceiver = CreateECMRecordEventReceiverDefinition(EventReceiverType.ItemDeleting, 1000, sharePointVersion);
                if (!ContainsECMRecordEventReceiver(newEventReceiver, currentEventReceivers))
                {
                    list.EventReceivers.Add(newEventReceiver);
                    eventReceiverAdded = true;
                }
                // ItemFileMoving receiver.
                newEventReceiver = CreateECMRecordEventReceiverDefinition(EventReceiverType.ItemFileMoving, 1000, sharePointVersion);
                if (!ContainsECMRecordEventReceiver(newEventReceiver, currentEventReceivers))
                {
                    list.EventReceivers.Add(newEventReceiver);
                    eventReceiverAdded = true;
                }
                // ItemAdded receiver.
                newEventReceiver = CreateECMRecordEventReceiverDefinition(EventReceiverType.ItemAdded, 1005, sharePointVersion);
                if (!ContainsECMRecordEventReceiver(newEventReceiver, currentEventReceivers))
                {
                    list.EventReceivers.Add(newEventReceiver);
                    eventReceiverAdded = true;
                }
                // ItemUpdated receiver.
                newEventReceiver = CreateECMRecordEventReceiverDefinition(EventReceiverType.ItemUpdated, 1007, sharePointVersion);
                if (!ContainsECMRecordEventReceiver(newEventReceiver, currentEventReceivers))
                {
                    list.EventReceivers.Add(newEventReceiver);
                    eventReceiverAdded = true;
                }
                // ItemCheckedIn receiver.
                newEventReceiver = CreateECMRecordEventReceiverDefinition(EventReceiverType.ItemCheckedIn, 1006, sharePointVersion);
                if (!ContainsECMRecordEventReceiver(newEventReceiver, currentEventReceivers))
                {
                    list.EventReceivers.Add(newEventReceiver);
                    eventReceiverAdded = true;
                }
                                
                if (eventReceiverAdded)
                {
                    list.Update();
                    list.Context.ExecuteQuery();
                }

                // Set the property that dictates the autodeclaration.
                list.SetPropertyBagValue(ECM_AUTO_DECLARE_RECORDS, autoDeclareRecords.ToString());
            }
            else
            {
                // Set the property that dictates the autodeclaration.
                list.SetPropertyBagValue(ECM_AUTO_DECLARE_RECORDS, autoDeclareRecords.ToString());
                //Note: Existing list event handlers will just stay as they are, no need to remove them.
            }
        }
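To show how the extension methods above would be consumed, here is a minimal usage sketch. It assumes a valid CSOM `ClientContext` and that the extensions are referenced from the PnP core library; the site URL and list title are placeholders.

```csharp
using Microsoft.SharePoint.Client;
using OfficeDevPnP.Core; // assumed namespace of the EcmListManualRecordDeclaration enum; may differ per PnP version

class RecordSettingsSample
{
    static void Main()
    {
        // Hypothetical site URL and list title, for illustration only.
        using (var context = new ClientContext("https://contoso.sharepoint.com/sites/records"))
        {
            List list = context.Web.Lists.GetByTitle("Contracts");

            // Allow manual record declaration on this list...
            list.SetListManualRecordDeclaration(EcmListManualRecordDeclaration.AlwaysAllowManualDeclaration);

            // ...or switch the list to automatic declaration instead,
            // which also registers the needed event receivers.
            list.SetListAutoRecordDeclaration(true);
        }
    }
}
```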

43.3 Additional resources

<a name="bk_addresources"> </a>

44 Sandbox solution transformation guidance - Event receivers

This article will help you to identify options and strategies on replacing existing event receivers from your sandbox solutions.

Applies to: Add-ins for SharePoint | SharePoint 2013 | SharePoint 2016 | SharePoint Online

Code-based sandbox solutions were deprecated back in 2014, and SharePoint Online has started the process of completely removing this capability. Code-based sandbox solutions are also deprecated in SharePoint 2013 and in SharePoint 2016.

44.1 Summary

The approach you take to handle events in SharePoint is slightly different in the SharePoint add-in model than it was with Full Trust Code or in code-based sandbox solutions. In typical previous solutions, event receivers were created using the SharePoint server-side object model and deployed via solution packages, which executed the code on the SharePoint servers. In the SharePoint add-in model, however, the event receiver implementation executes on the web server that hosts it; these are called Remote Event Receivers (RER). Event receivers can in many cases be replaced with a remote event receiver implementation. This article describes the various options and design considerations.

44.2 Options for replacing Event Receivers

<a name="sectionSection2"> </a>

Approach Additional Information
Remote Event Receiver <p><ul><li>Use remote event receivers in SharePoint</li><li>How to use remote event receivers for your SharePoint add-ins</li><li>Event receivers and list event receivers in the SharePoint add-in model</li><li>Auto tagging sample add-in for SharePoint</li><li>Handle events in SharePoint Add-ins</li></ul></p>
WebHooks <p>WebHooks for SharePoint are still under development and will be available for preview soon.</p><p><ul><li>Introducing SharePoint WebHooks</li></ul></p>
Remote Timer Job to monitor changes <p>Use the ChangeQuery object to monitor a site or list for modifications. This pattern is an alternative to Remote Event Receivers.</p><p><ul><li>SharePoint List Item Change Monitor</li><li>Remote Timer Job Pattern</li></ul></p>

44.3 Design Considerations

44.3.1 Remote Event Receivers

  • Requires hosting infrastructure
  • Hosting infrastructure must be highly available
  • The service endpoint that hosts the remote event receiver must be configured for anonymous authentication
  • Requires a trusted 3rd Party certificate if you are using SharePoint Online
  • Not intended for long running operations
  • Remote Event Receivers that are attached outside the context of an add-in (for example, attached using a console application or PowerShell) will not receive a SharePoint context token when invoked; you must fall back to app-only permissions or use the SharePointOnlineCredentials class
  • There is no retry mechanism
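To make the model concrete, here is a skeleton remote event receiver service: a sketch based on the standard `IRemoteEventService` contract from `Microsoft.SharePoint.Client.EventReceivers`; the class name and the handling logic are placeholders.

```csharp
using Microsoft.SharePoint.Client;
using Microsoft.SharePoint.Client.EventReceivers;

// A WCF service implementing IRemoteEventService, hosted on your own web server.
public class ContosoRemoteEventReceiver : IRemoteEventService
{
    // Synchronous ("-ing") events: SharePoint waits for the result.
    public SPRemoteEventResult ProcessEvent(SPRemoteEventProperties properties)
    {
        var result = new SPRemoteEventResult();
        if (properties.EventType == SPRemoteEventType.ItemAdding)
        {
            // Example: you could cancel the operation with an error message.
            // result.Status = SPRemoteEventServiceStatus.CancelWithError;
            // result.ErrorMessage = "Adding items is not allowed.";
        }
        return result;
    }

    // Asynchronous ("-ed") events: fire and forget, no result is returned.
    public void ProcessOneWayEvent(SPRemoteEventProperties properties)
    {
        if (properties.EventType == SPRemoteEventType.ItemAdded)
        {
            // React to the change, e.g. via CSOM using the supplied context token.
        }
    }
}
```

Remember that, per the considerations above, this endpoint must allow anonymous authentication and should not perform long-running work inside the event call.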

44.3.2 WebHooks

  • Requires hosting infrastructure
  • Hosting infrastructure must be highly available
  • Does not support synchronous events
  • Process changes after the event has occurred
  • Public preview available in summer 2016 for SharePoint Online
  • Not available in SharePoint on-premises builds at this time.

44.3.3 Remote Timer Job

  • Requires hosting infrastructure
  • Process changes after the event has occurred
  • Uses a polling mechanism to process changes
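The polling mechanism can be sketched with CSOM's `ChangeQuery` object. This is a sketch assuming a valid `ClientContext`; the list title and the persistence of the change token between polls are up to your implementation.

```csharp
using Microsoft.SharePoint.Client;

class ListChangeMonitor
{
    // Poll a list for item changes since the last stored change token (sketch).
    public static ChangeToken ProcessChanges(ClientContext context, string listTitle, ChangeToken lastToken)
    {
        List list = context.Web.Lists.GetByTitle(listTitle);

        var query = new ChangeQuery(false, false)
        {
            Item = true,                 // only list item changes
            Add = true,
            Update = true,
            DeleteObject = true,
            ChangeTokenStart = lastToken // null on the very first run
        };

        ChangeCollection changes = list.GetChanges(query);
        context.Load(changes);
        context.ExecuteQuery();

        ChangeToken newToken = lastToken;
        foreach (Change change in changes)
        {
            // Handle each change here (inspect change.ChangeType, cast to ChangeItem, ...).
            newToken = change.ChangeToken; // remember how far we processed
        }
        return newToken; // persist this and pass it to the next poll
    }
}
```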

44.4 Removing your sandbox code from your site

<a name="sectionSection3"></a>
When you deactivate your existing sandbox solution on your sites, any assets or files deployed using declarative options will not be removed; however, the features in the sandbox solution will automatically be deactivated and the event receiver(s) will be removed.

44.5 Additional Resources

<a name="bk_addresources"> </a>

45 Sandbox solution transformation guidance - Feature receivers

Transform or convert your code-based sandbox solutions to the SharePoint add-in model. Learn about the options and strategies for converting existing functionality to the SharePoint add-in model or alternative solutions.

Applies to: Add-ins for SharePoint | SharePoint 2013 | SharePoint 2016 | SharePoint Online

45.1 Summary

Feature receivers are typically used to apply different kinds of configurations or settings to SharePoint sites when a feature is activated or when a site is created (if the feature is associated to a site template / web template). Feature receivers have been deployed to SharePoint Online using sandbox solutions, but since code-based customizations can no longer be used, an alternative design will need to be chosen.

The approach you take to handle feature receivers in SharePoint is slightly different in the SharePoint add-in model than it was with Full Trust Code or in code-based sandbox solutions. You will need to re-design the solution in a way that uses the remote APIs (CSOM/REST) to apply to your sites the configurations that previously lived in the feature receiver(s).

45.2 Options for replacing Feature Receivers

<a name="sectionSection2"> </a>

Approach Additional Information
PowerShell based customizations <p>You use PowerShell scripts to provision new site collections (and potentially sub sites) where the needed customizations are applied using remote APIs. Typically this is done by using CSOM/REST directly in the PowerShell scripts or by using the PnP PowerShell commands, which provide an easy way to modify sites and content remotely.</p><p><ul><li>PnP Provisioning Engine</li><li>PnP PowerShell - Getting started with latest updates</li></ul></p>
Code based customizations <p>You can also apply the needed customizations using managed code with the remote APIs. This means that you either apply them as part of an administrative operation when the site is created, or you apply customizations to SharePoint that hook your code into the UI elements, for example by overriding the sub site creation logic so that you can associate customizations with the provisioning logic.</p><p><ul><li>Remote provisioning pattern for sub site creation</li><li>PnP CSOM Core Component</li></ul></p>

45.3 Design Considerations

45.3.1 PowerShell based provisioning

This model works great if your site provisioning model is based on administrative actions.

  • Requires a script to be executed that applies the needed customizations to the created sites
  • Can be combined with the site creation process if that is performed as an administrative operation
  • Does not require hosting infrastructure
  • No way to hook in automatically as part of the sub site creation process

45.3.2 Code based provisioning

  • Can require hosting infrastructure if combined with end user operations
  • You can use managed code, executed anywhere, with the CSOM/REST APIs for the needed operations
  • Can be used to integrate with SharePoint by overriding the sub site creation link
  • You can automate site collection and sub site provisioning using the remote APIs

If you want to provide an automatic way to run the needed remote code as part of the sub site creation logic, you’ll need to override the sub site creation link using user custom actions. This option is only available in sites that use the classic mode for libraries and lists.
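Registering such a user custom action can be sketched with CSOM as follows. The script URL is a placeholder; the referenced JavaScript file (not shown) would contain the logic that rewrites the sub site creation link to point at your own provisioning flow.

```csharp
using Microsoft.SharePoint.Client;

class SubSiteLinkOverride
{
    // Register a ScriptLink user custom action that loads custom JavaScript
    // on every (classic) page of the web. The loaded script would rewrite
    // the sub site creation link to call your provisioning logic instead.
    public static void RegisterScriptLink(ClientContext context)
    {
        Web web = context.Web;
        UserCustomAction action = web.UserCustomActions.Add();
        action.Location = "ScriptLink";
        // Placeholder URL to your hosted script.
        action.ScriptSrc = "~SiteCollection/SiteAssets/subsite-override.js";
        action.Sequence = 1000;
        action.Update();
        context.ExecuteQuery();
    }
}
```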

45.4 Reference approaches

45.4.1 Applying needed customizations to sites using PowerShell

Here’s a simple script which uses PnP PowerShell to upload a theme color file from your computer to SharePoint Online and activate it on a SharePoint site.

This sample is using PnP PowerShell, which provides more than 150 additional PowerShell cmdlets targeted for site configuration and asset management.


Connect-SPOnline -Url https://yoursite.sharepoint.com/ -Credentials (Get-Credential)
Add-SPOFile -Path c:\temp\company.spcolor -Folder /_catalogs/theme/15/
Set-SPOTheme -ColorPaletteUrl /_catalogs/theme/15/company.spcolor

45.4.2 Applying needed customizations to sites using code

Here’s a simple code snippet which uses the SharePoint Online CSOM to activate a custom theme by first uploading the theme assets to the SharePoint site and then activating the theme.

This sample uses the PnP CSOM Core Component, which extends the native out-of-the-box operations by introducing an additional set of extension methods for common operations. You can perform similar operations using native CSOM as well, but the code would be significantly more complex.


// Upload assets to theme folder
clientContext.Site.RootWeb.UploadThemeFile(
        HostingEnvironment.MapPath(string.Format("~/{0}", "Resources/Themes/SPC/SPCTheme.spcolor")));
clientContext.Site.RootWeb.UploadThemeFile(
        HostingEnvironment.MapPath(string.Format("~/{0}", "Resources/Themes/SPC/SPCbg.jpg")));

Web web = clientContext.Web;
// loading RootWeb.ServerRelativeUrl property;
clientContext.Load(clientContext.Site, w => w.RootWeb.ServerRelativeUrl); 
clientContext.ExecuteQuery();
// Let's first upload the contoso theme to host web, if it does not exist there
web.CreateComposedLookByUrl("Contoso",
                clientContext.Site.RootWeb.ServerRelativeUrl + "/_catalogs/theme/15/SPCTheme.spcolor",
                null,
                clientContext.Site.RootWeb.ServerRelativeUrl + "/_catalogs/theme/15/SPCbg.jpg",
                string.Empty);

// Setting the Contoso theme to the host web
web.SetComposedLookByUrl("Contoso");

When you use code-based approaches, we recommend using the PnP Provisioning Engine for template management, which will dramatically simplify the development effort.
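As a minimal sketch of the Provisioning Engine idea, assuming the OfficeDevPnP.Core package and two valid client contexts, you can extract a template from a model site and apply it elsewhere:

```csharp
using Microsoft.SharePoint.Client;
using OfficeDevPnP.Core.Framework.Provisioning.Model;

class TemplateSample
{
    // Extract a provisioning template from a source web and apply it to a target web (sketch).
    public static void CloneSite(ClientContext sourceContext, ClientContext targetContext)
    {
        // Extract a template (branding, fields, content types, lists, ...) from the source web.
        ProvisioningTemplate template = sourceContext.Web.GetProvisioningTemplate();

        // Apply the extracted template to the target web.
        targetContext.Web.ApplyProvisioningTemplate(template);
    }
}
```

The design benefit is that the site configuration lives in a reusable template object (which can also be serialized to XML) instead of being scattered across imperative feature receiver code.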

45.5 Removing sandbox solution containing feature receiver code from your site

<a name="sectionSection3"> </a>
If your sandbox solution contains feature receiver logic for feature deactivation and it’s important that it runs, you should ensure that such features are deactivated before code-based support is completely removed from SharePoint Online. You can still uninstall sandbox solutions from SharePoint Online after code-based support is disabled, but your feature receiver code will not be executed even though the features are removed from the site.

When you deactivate your existing sandbox solution on your sites, any assets or files deployed using declarative options will not be removed; however, the feature will automatically be deactivated. Whether the code in the feature receiver executes depends on the timing of when the sandbox solution is deactivated.

45.6 Additional Resources

<a name="bk_addresources"> </a>

46 Sandbox solution transformation guidance - InfoPath

If you’re using InfoPath forms with code-behind, those forms depend on code-based sandbox solutions for executing the code-behind. This article will help you either fix or transform your InfoPath forms so that there’s no sandbox solution dependency anymore.

Applies to: InfoPath forms for SharePoint Online | SharePoint 2013 | SharePoint 2016

Code-based sandbox solutions were deprecated back in 2014, and SharePoint Online has started the process of completely removing this capability. Code-based sandbox solutions are also deprecated in SharePoint 2013 and in SharePoint 2016.

46.1 Analyze and if possible fix your InfoPath forms

<a name="sectionSection1"> </a>

In this section we show you an analysis and fixing model that you can apply to your InfoPath forms. Depending on your form, you can simply fix the form and redeploy it, or you need to move away from InfoPath and use an alternative approach to realize the needed functionality. Before taking those actions, however, it’s important to assess the business need for your form: we often see a lot of old forms which are no longer business relevant, and in those cases it’s easier to simply drop the form.

46.1.1 How do I know whether I have InfoPath forms with code behind?

The recommended option is to use the SharePoint Sandbox Solution scanner tool, as the report from this tool will indicate whether a sandbox solution is coming from an InfoPath file. Additionally, the tool will tell you if the assembly used in the solution is “useless”, as described later in this article.

46.1.2 Are my forms still relevant?

Before diving into the remediation/transformation work, it’s important to ask the question: is this form still critical for my business? If so, continue to the next chapter; if not, you need to think about the data created using this form. Typically the data was created as InfoPath XML files which live in a SharePoint list. If you remove the form you’ll no longer be able to visualize the data: sometimes that’s fine since form and data are no longer relevant, but in case you want to keep access to the data you can convert the InfoPath XML files to SharePoint list (items) data. The PnP-Transformation repository contains a sample showing how you can achieve this.

46.1.3 Downloading the InfoPath form (XSN file) for inspection

In the previous step you’ve confirmed that you have InfoPath forms that require work; in this section you’ll learn how to download these forms. InfoPath forms with code behind are deployed either as “Form Library” or as “Site Content Type”. Depending on the chosen publishing model, you can download the forms as follows.

46.1.3.1 Download “Form Library” deployed InfoPath forms

In this case the XSN file is inside the Forms folder of the form library to which the InfoPath form was deployed. To identify the form library, you can take a look at the WSP package name, as it follows this convention: “InfoPath Form_LibName_id”. Once you have the library, you need to download the template.xsn file from the Forms folder of the library. You can do so by constructing a URL of the form library URL + “Forms/template.xsn”, as shown in this sample: https://contoso.sharepoint.com/sites/infopath1/IHaveCodeBehind/Forms/template.xsn, and using the browser to download the file.

46.1.3.2 Download “Site Content Type” deployed InfoPath forms

InfoPath forms deployed as a content type have their XSN file stored in a form library which was connected to the content type at form deployment time. As in the previous section, you can obtain the library name from the WSP package name. What’s different this time is that the form is actually stored as a file in that library, so you can simply download it from the form library.

46.1.4 Fixing your InfoPath forms

The previous sections have shown you how to find and download the InfoPath forms with code behind, but do they really contain useful code behind? We see quite a lot of forms for which the form author accidentally clicked the Code Editor button in the Developer ribbon of InfoPath:

InfoPath code behind

Once you’ve done this you’ll have code behind… but this code behind doesn’t do anything, and by removing it you can convert your InfoPath form with code behind to a regular InfoPath form which has no code behind and as such no dependency on sandbox solutions!

46.1.4.1 How do I know the code behind is “useless”?

The SharePoint Sandbox Solution scanner will tell you if your InfoPath form has useless code, but if you want to learn more then continue reading. You might wonder how to distinguish between useless and needed code behind, as you can only fix the first category. If you still have the originally deployed form (so not the one you’ve downloaded in the previous steps) you can simply have a peek at the code. The default empty code is shown below; if you have similar code, then this form can be fixed by dropping the code:

using Microsoft.Office.InfoPath;
using System;
using System.Xml;
using System.Xml.XPath;

namespace Form1
{
    public partial class FormCode
    {
        // Member variables are not supported in browser-enabled forms.
        // Instead, write and read these values from the FormState
        // dictionary using code such as the following:
        //
        // private object _memberVariable
        // {
        //     get
        //     {
        //         return FormState["_memberVariable"];
        //     }
        //     set
        //     {
        //         FormState["_memberVariable"] = value;
        //     }
        // }

        // NOTE: The following procedure is required by Microsoft InfoPath.
        // It can be modified using Microsoft InfoPath.
        public void InternalStartup()
        {
        }
    }
}

In case you only have the XSN file which you’ve downloaded in the previous step, you can rename your XSN file to a CAB file (e.g. template.cab), extract the assembly, and use .NET reflection tools (like the open source ILSpy) to inspect the code. A typical view of useless code behind looks like this in ILSpy:

Useless code as seen in ILSpy

46.1.4.2 Dropping code behind from InfoPath forms to fix them

If you’ve confirmed your code behind is useless you can easily drop it by:

  • Opening up the form in InfoPath designer (right-click - Design)
  • Go to Form Options via File - Info
  • Select the Programming category and click on Remove Code
  • Publish the form again via File - Info - Quick Publish
  • Deactivate the linked sandbox solution via Site Settings - Solutions
  • Confirm the form works as expected
  • Delete the sandbox solution

If you don’t have access to the InfoPath XSN file and source code anymore, you can still fix these forms by simply deactivating the sandbox solutions that contain “useless” code only. Only do this for the ones listed in the sandbox solution report output with IsEmptyInfoPathAssembly = true.

46.2 Migrate your InfoPath forms

<a name="sectionSection2"> </a>
If the guidance in the previous chapter was not applicable to your InfoPath form, it essentially means your form is still business relevant and contains code behind that you cannot drop. If that’s the case, the typical solution is moving away from InfoPath, which can be done in various ways:

  • Use Azure PowerApps and Microsoft Flow
  • Build a SharePoint Add-In that leverages the remote APIs to read/write SharePoint data

46.2.1 Building SharePoint Add-In’s to replace your InfoPath forms

When you opt to use regular SharePoint Add-Ins to replace your InfoPath forms, you have several options. Below are three options we’ve worked out in more detail, but as said, you can perfectly well use variations of these. The three options we would like to dive into are:

Knockout sample

To better help you with converting your InfoPath form, we’ve listed 11 common InfoPath coding patterns and show you how you can implement those patterns using the three SharePoint Add-In options mentioned above. To do so we first developed a reference InfoPath form which uses the most common InfoPath coding patterns, and then migrated that form to three SharePoint Add-In flavors. The links below show these common patterns:

46.2.2 Migrating your InfoPath data

Once you’ve moved your InfoPath form over to a new solution, you might also want to migrate your data from InfoPath XML to regular SharePoint list data or to the data layer of your choice. Since InfoPath files are XML files, it’s fairly easy to read and transform them. The PnP-Transformation repository contains a sample showing how you can achieve this.
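The conversion can be sketched with `System.Xml.Linq` plus CSOM. Note that the field names and the XML element names below are hypothetical; a real InfoPath form uses its own `my:` namespace URI and element names, which you should inspect in your own form files.

```csharp
using System.Xml.Linq;
using Microsoft.SharePoint.Client;

class InfoPathDataMigrator
{
    // Read one InfoPath XML file and create a corresponding list item (sketch).
    public static void MigrateForm(ClientContext context, string xmlPath, string targetListTitle)
    {
        XDocument form = XDocument.Load(xmlPath);
        // Hypothetical namespace and element names; inspect your form's XML for the real ones.
        XNamespace my = "http://schemas.microsoft.com/office/infopath/2003/myXSD";
        string title = (string)form.Root.Element(my + "Title");
        string status = (string)form.Root.Element(my + "Status");

        List targetList = context.Web.Lists.GetByTitle(targetListTitle);
        ListItem item = targetList.AddItem(new ListItemCreationInformation());
        item["Title"] = title;
        item["Status"] = status; // assumes a matching field exists on the target list
        item.Update();
        context.ExecuteQuery();
    }
}
```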

46.2.3 Code-based operations are disabled and now my existing forms don’t open anymore

As soon as code-based operations are disabled, no code can run anymore in the sandbox. If you have forms that execute code, it also means that opening existing forms will no longer work. The steps below will help you handle this:

  • If you’ve migrated your InfoPath form to a new solution then you’ve most likely already converted your data and as such you’re good
  • If you opted to keep the form as is (e.g. since it’s not business critical anymore) but you still want to open the existing forms then you can take one of the following steps:
    • Remove the code behind from your form and republish it (see the Dropping code behind from InfoPath forms to fix them section above)
    • Use InfoPath Client to open the forms
    • Migrate the form data to plain SharePoint list data (see the Migrating your InfoPath data section above)

47 Sandbox solution transformation guidance - Web Parts

Transform or convert your code-based sandbox solutions to the SharePoint add-in model. Learn about the options and strategies for converting existing functionality to the SharePoint add-in model or alternative solutions.

Applies to: Add-ins for SharePoint | SharePoint 2013 | SharePoint 2016 | SharePoint Online

47.1 Summary

One of the reasons many developers have leveraged code-based sandbox solutions is a desire to utilize visual web parts. These provide a great way to separate code from layout as well as utilize the ASP.NET controls. You can of course continue to use visual web parts in a provider-hosted add-in via client web parts. This is a great approach and provides a direct migration path for many applications.

Another method is to re-write the web part as a client side solution. This will involve redesigning the solution to use JavaScript, HTML fragments, and one or more supporting frameworks. While this is net-new work, it has the added benefit of setting up your solution to easily integrate into the upcoming SharePoint Framework. This is a great choice for simple display or data entry web parts and can scale up to full page client applications.

47.2 Options for replacing Web Parts

<a name="sectionSection2"> </a>

Approach Additional Information
Provider Hosted Add-In Client Webpart <ul><li>Get started creating provider-hosted SharePoint Add-ins</li><li>Create add-in parts to install with your SharePoint Add-in</li><li>Client Web Part Definition Schema</li><li>Set up an on-premises development environment for SharePoint Add-ins</li></ul>
Client Side Solution <ul><li>Simple React Form Sample</li><li>JavaScript Embedding Samples</li><li>Patterns and Practices JS Core</li></ul>

47.3 Design Considerations

47.3.1 Provider Hosted Add-In

<ul>
<li>Requires hosting infrastructure</li>
<li>Hosting infrastructure must be highly available</li>
<li>The client part is displayed in an iframe, limiting communication with the rest of the page</li>
<li>Must use remote APIs either via CSOM or REST</li>
</ul>

47.3.2 Client Side Solution

<ul>
<li>
<a name="actionsupportnote"></a>
Please note that the ability to embed JavaScript in the prescribed way (through a UserCustomAction) does not work currently outside of the classic experience. For these cases you can link to the files using a script editor web part.</li>
<li>Cannot elevate permissions, instead use a micro-service with add-in only permissions</li>
<li>Limited by permissions of current user</li>
</ul>

47.4 Removing your sandbox code from your site

<a name="sectionSection3"> </a>
When you deactivate your existing sandbox solution on your sites, any assets or files deployed using declarative options will not be removed; however, the features in the sandbox solution will automatically be deactivated and the event receiver will be removed.

47.5 Additional Resources

<a name="bk_addresources"> </a>

48 Sandbox solution transformation guidance

Transform or convert your code-based sandbox solutions to the SharePoint add-in model. Learn about the options and strategies for converting existing code-based functionality to the SharePoint add-in model or alternative solutions.

Applies to: add-ins for SharePoint | SharePoint 2013 | SharePoint Online

Code-based sandbox solutions were deprecated back in 2014, and SharePoint Online has started the process of completely removing this capability. Code-based sandbox solutions are also deprecated in SharePoint 2013 and in SharePoint 2016.

Transforming your sandbox solutions to the SharePoint add-in model involves analyzing your existing extensions, designing and developing your new add-in(s) for SharePoint, and then testing and deploying your add-in in your production environment.

48.1 What is a code-based sandbox solution in SharePoint

<a name="sectionSection0"> </a>
Sandbox solutions are customization packages which can be used to deploy customizations to SharePoint at the site collection level. If a sandbox solution contains code, that code is executed in a special isolated process with a limited set of APIs for accessing SharePoint services and content.

There are two types of sandbox solutions:

  • Code-based sandbox solutions, which contain a custom assembly in the package
  • Declarative sandbox solutions, which only contain XML-based configurations and related assets

Declarative (XML-based) sandbox solutions can be further divided into the following types based on their use case.

  • Site template – Generated using the “Save site as a template” functionality from existing sites
  • Design package – Generated using Design Manager from publishing site
  • Custom sandbox solutions - Created in Visual Studio, for example for branding assets; do not contain assemblies

Code-based sandbox solutions can be further divided into the following types based on their use cases.

  • Declarative sandbox solution with empty assembly
  • Sandbox solution containing InfoPath form with code
  • Code-based sandbox solutions with customizations like web parts, event receivers and/or feature receivers
  • Sandbox solutions with custom workflow action

When you are planning your move away from sandbox solutions, you should evaluate the functional and business requirements of each specific solution and decide the future design direction based on those.

48.2 Steps to perform transformation

<a name="sectionSection1"> </a>

When you transform your sandbox solutions to the SharePoint add-in model, you want to ensure that the impact on your users is minimal. Carefully analyze your current sandbox solutions, and then design your new add-in for SharePoint to meet the needs of your organization. We recommend the following process to ensure a successful transformation.

  1. Readiness. Learn about:

  2. Solution assessment. Analyze the functional and business requirements by:

    • Identifying deployed sandbox solutions in your current environment, for which you can use either the SharePoint Sandbox Solution scanner (video) or the specific sandbox solution inventory script. The first is a tool offering a lot of options and a detailed inventory; the latter is a PowerShell script giving you a basic inventory. Both tools are provided to you as open source by the SharePoint PnP team.

    • Reviewing requirements with your users. Consider asking your users to demonstrate how they use the existing sandbox solutions to perform their daily work.

    • Identifying, documenting, and designing new functionality to include in the new add-in for SharePoint. Consider reviewing your list of new feature requests from your users for additional ideas.

    • Identifying unused features, and agreeing with your users to omit this functionality from the new add-in for SharePoint.

    • For each solution, determining whether to replace it with an add-in for SharePoint or implement it using either out-of-the-box capabilities or an alternative solution.

  3. Solution planning. Design the new application using the SharePoint add-in model based on:

    • The requirements gathered in step 2.

    • Your analysis of the existing code. During your code analysis, consider identifying portions of the code that can be dropped (for example, the code is no longer being used, or the requirements have changed).

  4. Develop and test the SharePoint add-in model version of your application.

  5. Deploy your new add-in.

48.3 Replacing sandbox solution customizations

<a name="sectionSection2"> </a>

Here are typical customizations which are included in sandbox solutions, together with potential transformation options. We are looking at adding further information for each of the customization types, so that you will have real-world examples of the transformation options.

Customization Transformation options
Declarative solution with empty assembly <p>You can control assembly creation from the Visual Studio solution project properties. See the following KB article for details - Remove assembly reference from your Sandbox solution created in Visual Studio.</p> <p>Important: when you use the SharePoint Sandbox Solution scanner, the scan output will list which solutions have an empty assembly, and the tool will create updated sandbox solution packages for you in which the assembly is dropped. You can then simply replace the existing sandbox solution with the updated one.</p>
InfoPath form with code <p>If you have published an InfoPath form with code from the InfoPath client, it’s actually published to SharePoint as a sandbox solution. This means that the form code is executed by the sandbox engine in SharePoint.</p> <p>Moving away from code-based InfoPath forms depends on the actual business use case. There are multiple options, from generating a custom UI as an add-in to utilizing other form techniques.</p><p>See more details on the options in our InfoPath transformation guidance article</p>
Web Part <p>Web parts are typically converted either to add-in parts or they are implemented with fully client side technologies by using so called JavaScript embed pattern. </p><p>See following resources for additional information <lu><li>Customize your SharePoint site UI using JavaScript</li><li>Create add-in parts to install with your SharePoint Add-in</li><li>How to update your SharePoint pages via the embedding of JavaScript</li><li>Cross site collection navigation</li></lu></p>
Visual Web Part <p>Visual web parts are transformed in similar ways as normal web parts. User control used in visual web part will need to be also replaced, since in sandbox solution cases, it’s included inside of the assembly.</p>
Event Receiver <p>Event receivers can in many cases be replaced with the remote event receiver implementation. Remote event receivers do however need to hosted in some platform, typically on specific provider hosted add-in.</p><p>See following resources for additional information <lu><li>Use remote event receivers in SharePoint</li><li>How to use remote event receivers for your SharePoint add-ins</li></lu></p>
Feature Receiver <p>Are typically replaced with a remote API based operation, like using CSOM or REST for applying the needed customization or configuration to site level. If needed API is missing from the remote APIs (CSOM/REST), report this gap using SharePoint UserVoice.</p><p>Feature receivers are used for example to set a custom master page or theme to site, when they are activated. These kind of operations can be easily replaced with remote code based solutions or using PnP PowerShell, which provides easy commands for controlling site configuration.</p>
Custom workflow action <p>Typical code migration path for these kind of customizations is to start using either SharePoint 2013 workflows, move using alternative solutions, like Microsoft Flow or using third party solutions.</p>
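
To illustrate the feature receiver row above, here is a minimal CSOM sketch (not part of the original sample) of applying a composed look remotely instead of from a server-side feature receiver. It assumes you already have an authenticated ClientContext named `ctx`, and the palette and font scheme URLs are placeholders for files that would exist on the target site.

```csharp
// Sketch only: replace a feature receiver that set a theme with remote CSOM code.
// Assumes an authenticated ClientContext `ctx`; the theme file URLs are placeholders.
Web web = ctx.Web;
web.ApplyTheme("/_catalogs/theme/15/palette003.spcolor",   // color palette
               "/_catalogs/theme/15/fontscheme003.spfont", // font scheme
               null,                                       // no background image
               false);                                     // don't share generated files
ctx.ExecuteQuery();
```

The same operation could be performed with PnP PowerShell as part of a provisioning script; the point is that the configuration is pushed from outside SharePoint rather than executed by sandbox code.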

We are looking at adding specific articles on transformation techniques for specific technical scenarios.

48.4 Removing your sandbox code from your site

<a name="sectionSection3"> </a>
When you deactivate an existing sandbox solution from your sites, any assets or files deployed using declarative options are not removed. If you have used sandbox solutions to introduce code-based web parts, that functionality is disabled on the sites. Pages still render normally, so there is no direct end-user impact when a solution is deactivated, apart from the removal of code-based functionality such as web parts.

48.5 How will support of code-based sandbox solution be removed from SharePoint Online?

<a name="howremoved"> </a>
Support will be removed by blocking code-based operations from executing in sandbox-solution-based code. This means that your sandbox solutions will not be explicitly deactivated from the solution store, but any code-based operation will no longer be performed. Sandbox solutions will remain in 'activated' status in the solution gallery. Features deployed using sandbox solutions will not be deactivated automatically, which means that any code associated with feature deactivation or uninstall handlers will not be run.

All declarative definitions in the sandbox solution will continue working after this change is applied in SharePoint Online.

48.6 Additional resources

<a name="bk_addresources"> </a>

49 Set external sharing on site collections in Office 365

You can control external sharing settings on a SharePoint site collection in Office 365, allowing external users (users who don’t have an organization account in your Office 365 subscription) access to your site collection.

Applies to: add-ins for SharePoint | SharePoint Online | Office 365

The Core.ExternalSharing code sample shows you how to control your external sharing settings on a SharePoint site collection. Use this solution to:

  • Control external sharing settings during your site provisioning process.

  • Prepare your site collection for sharing with external users.

Note: External sharing settings are only available in Office 365.

49.1 Before you begin

<a name="sectionSection0"> </a>

To get started, download the Core.ExternalSharing sample add-in from the Office 365 Developer patterns and practices project on GitHub.

49.2 Using the Core.ExternalSharing app

<a name="sectionSection1"> </a>

Verify that your Office 365 subscription allows external sharing. To do this:

  1. Open your Office 365 admin center.

  2. On the left navigation menu, choose EXTERNAL SHARING.

  3. Choose Sharing Overview.

  4. In Sites, ensure that Let external people access your sites is On.

Verify your external site settings on your SharePoint site collection. To do this:

  1. Open your Office 365 admin center.

  2. On the left navigation menu, choose SharePoint to open your SharePoint admin center.

  3. Select the check box next to the site collection URL that you want to verify your external sharing settings on.

  4. On the ribbon, choose Sharing.

  5. Review your external sharing settings in the sharing dialog. After running the code sample, return to the sharing dialog to verify that your external sharing settings changed.

When you run this code sample, Main in Program.cs performs the following tasks:

  • Gets the Office 365 admin center URL.

  • Gets the site collection URL to configure external sharing settings on.

  • Gets your Office 365 administrator credentials.

  • Calls GetInputSharing, which prompts the user to choose an external sharing setting (SharingCapabilities) to apply to the site collection. The external sharing setting choices include:

    • Disabled, which turns off external sharing on the site.

    • ExternalUserAndGuestSharing, which enables external user and guest sharing on the site.

    • ExternalUserSharingOnly, which enables external user sharing only.

  • Calls SetSiteSharing.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

static void Main(string[] args)
{
    // Prompt for the Office 365 admin center URL.
    Console.WriteLine("Enter your Tenant Admin URL for your Office 365 subscription:");
    string tenantAdminURL = GetSite();

    // End the program if no Office 365 admin center URL is supplied.
    if (string.IsNullOrEmpty(tenantAdminURL))
    {
        Console.WriteLine("No tenant admin URL was supplied. Exiting.");
        return;
    }

    // Prompt the user for an Office 365 site collection URL.
    Console.WriteLine("Enter your Office 365 Site Collection URL:");
    string siteUrl = GetSite();

    // Prompt for credentials.
    Console.WriteLine("Enter credentials for your Office 365 Site Collection {0}:", siteUrl);
    string userName = GetUserName();
    SecureString pwd = GetPassword();

    // End the program if no credentials are entered.
    if (string.IsNullOrEmpty(userName) || (pwd == null))
    {
        Console.WriteLine("No credentials were supplied. Exiting.");
        return;
    }

    try
    {
        SharingCapabilities _sharingSettingToApply = GetInputSharing(siteUrl);
        using (ClientContext cc = new ClientContext(tenantAdminURL))
        {
            cc.AuthenticationMode = ClientAuthenticationMode.Default;
            cc.Credentials = new SharePointOnlineCredentials(userName, pwd);
            SetSiteSharing(cc, siteUrl, _sharingSettingToApply);
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine("An error occurred: {0}", ex.Message);
    }

    Console.WriteLine("Hit Enter to exit.");
    Console.Read();
}

SetSiteSharing does the following:

  • Uses Tenant.GetSitePropertiesByUrl to retrieve SiteProperties for your site collection.

  • Uses Tenant.SharingCapability to determine whether external sharing is enabled on your Office 365 subscription.

  • If sharing is enabled in your Office 365 subscription, sets the SiteProperties.SharingCapability to the external sharing settings the user entered.

public static void SetSiteSharing(ClientContext adminCC, string siteCollectionURl, SharingCapabilities shareSettings)
{
    var _tenantAdmin = new Tenant(adminCC);
    SiteProperties _siteprops = _tenantAdmin.GetSitePropertiesByUrl(siteCollectionURl, true);
    adminCC.Load(_tenantAdmin);
    adminCC.Load(_siteprops);
    adminCC.ExecuteQuery();

    SharingCapabilities _tenantSharing = _tenantAdmin.SharingCapability;
    var _currentShareSettings = _siteprops.SharingCapability;
    bool _isUpdatable = false;

    if (_tenantSharing == SharingCapabilities.Disabled)
    {
        Console.WriteLine("Sharing is currently disabled in your tenant, so the site setting cannot be changed.");
    }
    else
    {
        if (shareSettings == SharingCapabilities.Disabled)
        {
            _isUpdatable = true;
        }
        else if (shareSettings == SharingCapabilities.ExternalUserSharingOnly)
        {
            _isUpdatable = true;
        }
        else if (shareSettings == SharingCapabilities.ExternalUserAndGuestSharing)
        {
            if (_tenantSharing == SharingCapabilities.ExternalUserAndGuestSharing)
            {
                _isUpdatable = true;
            }
            else
            {
                Console.WriteLine("ExternalUserAndGuestSharing is currently disabled in your tenant, so the site setting cannot be changed.");
            }
        }
    }

    if (_currentShareSettings != shareSettings && _isUpdatable)
    {
        _siteprops.SharingCapability = shareSettings;
        _siteprops.Update();
        adminCC.ExecuteQuery();
        Console.WriteLine("Set sharing on site {0} to {1}.", siteCollectionURl, shareSettings);
    }
}

49.3 Additional resources

<a name="bk_addresources"> </a>

50 SharePoint development and design tools and practices

You can use SharePoint design and development tools to apply branding to your SharePoint sites.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

This article provides information about the development and design options that are available in SharePoint. You can also find information about how to use the remote provisioning pattern to apply branding assets to a SharePoint site.

50.1 Key SharePoint development and design terms and concepts

<a name="sectionSection0"> </a>

|Term or concept|Definition|More information|
|:-----|:-----|:-----|
|Design Manager|A feature activated in SharePoint publishing sites or Team sites with publishing enabled that is used to import and manage site branding assets and export them to a design package.|Use Design Manager to import branding assets created in other tools, such as Adobe PhotoShop or Adobe DreamWeaver, into SharePoint.<br/>Design Manager is not available for use with OneDrive for Business or SharePoint Team sites where publishing is not enabled.|
|Design package|Designed for use with SharePoint 2013 Publishing sites; contains branding assets that are stored in Design Manager.|SharePoint 2013 Design Manager design packages|
|Remote provisioning|A model that involves provisioning sites by using templates and code that runs outside SharePoint in a provider-hosted add-in.|Site provisioning techniques and remote provisioning in SharePoint 2013<br/>Self-service site provisioning using apps in SharePoint 2013|
|Root web|The first web inside a site collection.|The root web is also sometimes referred to as the Web Application Root.|
|Sandboxed solutions|.wsp files that contain assemblies, other non-compiled components, and an XML manifest file. A sandboxed solution uses partial-trust code.|Sandboxed solutions|
|SharePoint Designer 2013|An HTML designer and design asset management tool for managing branding elements in SharePoint. In SharePoint 2013, SharePoint Designer mainly supports custom workflows.|What’s changed in SharePoint Designer 2013?<br/>What’s new with SharePoint 2013 site development?|
|.wsp file|A SharePoint solution file. A .wsp is a .cab file that categorizes site assets and organizes them with a manifest.xml file.|Solutions overview|

50.2 Development options

<a name="sectionSection1"> </a>

When you use SharePoint 2013 as a development platform, you’ll need to create an environment to develop, test, build, and deploy your content. For information about the options for development, see Development environment considerations in the article SharePoint Server 2013 Application Lifecycle Management. Table 2 lists the considerations for the various development options.

Options for SharePoint development, testing, and acceptance

|Option|Considerations|
|:-----|:-----|
|Team foundation server|- Located on Visual Studio Online for easy access.<br/>- Includes a centralized source code and life cycle management system.|
|Cloud test and acceptance environments|- Use a separate tenant for acceptance testing.<br/>- Separate test environment for on-premises testing.|
|On-premises test and acceptance environments|- Use for on-premises SharePoint deployments.<br/>- Hosted by customer on-premises or in Microsoft Azure.|

In most cases, you’ll need at least the following tenants, although this can vary depending on your requirements:

  • Developer tenant. As a best practice, provision and use your own developer site. This way, you avoid mixing your data with the production environment. To sign up for and provision a developer site, see Sign up for an Office 365 Developer Site in the article Sign up for an Office 365 Developer Subscription and set up your tools and environment.

  • Integration/testing tenant. Use this site to make sure that new apps and functionality work across more than one site collection and against the services and data in the production environment. Configure the environment to include capabilities that are in preview. (To do this, in your tenant admin console, choose Service Settings, and then under the Updates setting, choose First Release.) You can use Visual Studio Online to run automated testing and any other continuous integration testing.

  • Production tenant. Release tested, accepted, and approved apps to this tenant. You can create a developer site on this tenant to develop and test apps that are small in scope or have isolated impact. In general, avoid mixing your development and production environments.

50.3 Design tools

<a name="sectionSection2"> </a>

Use standard web design and development tools, such as HTML, images, CSS files, and JavaScript files to create SharePoint site branding assets. For example, you can use Adobe DreamWeaver and Adobe PhotoShop to design the HTML, CSS, JavaScript, and image files you’ll use to brand your SharePoint sites. Alternatively, you can use SharePoint Designer 2013 to create, manage, and customize branding assets, or create custom solutions in Visual Studio 2013.

50.3.1 SharePoint design packages and .wsp files

Design packages are .wsp files created by Design Manager that follow predictable rules for packaging design assets. They are, essentially, sandboxed solutions. If you’re using another tool to package branding assets in a .wsp file, your branding assets will be in a less fixed and predictable state.

The design package includes all files that have been customized. For example, if you create a page layout that uses a custom content type, the design package includes the page layout, the custom content type it uses, and all custom site columns. The design package also includes several files related to any composed looks that have been applied to your SharePoint site, including files uploaded to the following locations:

  • Site assets library

  • Style library

  • Master Page gallery

If you applied composed looks to a site before you applied custom branding, the design package will include files with .themedcss and .themedpng file extensions. To apply the branding assets in a design package to a SharePoint site, export the design package and use the remote provisioning pattern to apply the contents of the design package.

SharePoint 2013 includes APIs that you can use to work with design packages. Whether you're using the SSOM, CSOM, or JSOM, you can use the DesignPackage or DesignPackageInfo classes.

50.3.1.1 Using the design package CSOM to apply the contents of design packages to a SharePoint site

The following example shows how to use the Design Package APIs in the remote provisioning pattern to apply the contents of design packages to a SharePoint site.

This code was designed for use with Publishing sites. Although it is possible to use the Design Packages API to apply branding to Team sites that have the Publishing feature enabled, this can introduce long-term support issues.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

using Microsoft.SharePoint.Client;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using Microsoft.SharePoint.Client.Publishing;
namespace ProviderSharePointAppWeb
{
    public partial class Default : System.Web.UI.Page
    {
        protected void Page_PreInit(object sender, EventArgs e)
        {
            Uri redirectUrl;
            switch (SharePointContextProvider.CheckRedirectionStatus(Context, out redirectUrl))
            {
                case RedirectionStatus.Ok:
                    return;
                case RedirectionStatus.ShouldRedirect:
                    Response.Redirect(redirectUrl.AbsoluteUri, endResponse: true);
                    break;
                case RedirectionStatus.CanNotRedirect:
                    Response.Write("An error occurred while processing your request.");
                    Response.End();
                    break;
            }
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            // Use TokenHelper to get the client context and Title property.
            // To access other properties, the add-in might need to request permissions
            // on the host web.
            var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);
            
            // Publishing feature GUID to use the infrastructure for publishing. 
            Guid PublishingFeature = Guid.Parse("f6924d36-2fa8-4f0b-b16d-06b7250180fa");

            // The site-relative URL of the design package to install.
            // This sandbox design package should be uploaded to a document library.
            // For practical purposes, this can be a configuration setting in web.config.
            string fileRelativePath = @"/sites/devsite/brand/Dev.wsp";

            //string fileUrl = @"https://SPXXXXX.com/sites/devsite/brand/Dev.wsp";
            
        
            using (var clientContext = spContext.CreateUserClientContextForSPHost())
            {
                // Load the site context explicitly; otherwise, when installing the
                // design package, the path for the package will not be resolved.
                // If the package cannot be found, an exception is thrown.
                var site = clientContext.Site;
                clientContext.Load(site);
                clientContext.ExecuteQuery();
              
                // Validate whether the Publishing feature is active. 
                if (IsSiteFeatureActivated(clientContext,PublishingFeature))
                {
                    DesignPackageInfo info = new DesignPackageInfo()
                    {
                        PackageGuid = Guid.Empty,
                        MajorVersion = 1,
                        MinorVersion = 1,
                        PackageName = "Dev"
                    };
                    Console.WriteLine("Installing design package ");
                    
                    DesignPackage.Install(clientContext, clientContext.Site, info, fileRelativePath);
                    clientContext.ExecuteQuery();

                    Console.WriteLine("Applying design package");
                    DesignPackage.Apply(clientContext, clientContext.Site, info);
                    clientContext.ExecuteQuery();
                }
            }
        }
        public bool IsSiteFeatureActivated(ClientContext context, Guid guid)
        {
            var features = context.Site.Features;
            context.Load(features);
            context.ExecuteQuery();

            foreach (var f in features)
            {
                if (f.DefinitionId.Equals(guid))
                    return true;
            }
            return false;
        }
 
    }
}

50.3.1.2 Using FileCreationInformation to upload branding assets and a master page to a Team site

You can use SharePoint 2013 CSOM functionality to install and uninstall design packages and export design packages to SharePoint Online sites. For example, use the DesignPackage.Install method to install the design package on the site, as shown in the following example.

public static void Install(
        ClientRuntimeContext context,
        Site site,
        DesignPackageInfo info,
        string path
)

The DesignPackageInfo class specifies metadata that describes the contents of the design package to be installed. Use the Uninstall method to uninstall the design package from the site, as shown in the following example.

public static void UnInstall(
        ClientRuntimeContext context,
        Site site,
        DesignPackageInfo info
)

If you need to brand a Team site with the Publishing feature enabled, or a Publishing site on SharePoint Online, you can use the ExportEnterprise or the ExportSmallBusiness method to export design packages for site templates to the Solution Gallery. Use the ExportSmallBusiness method with the small business site template, and use the ExportEnterprise method for all other site templates, as shown in the following example. In the example, note that packageName is a string that represents the name of the design package.

public static ClientResult<DesignPackageInfo> ExportEnterprise(
        ClientRuntimeContext context,
        Site site,
        bool includeSearchConfiguration
)

When you use this method, you can include the search configuration in the design package, as shown in the next example. Note that all design package methods operate at the level of the site collection.

public static ClientResult<DesignPackageInfo> ExportSmallBusiness(
        ClientRuntimeContext context,
        Site site,
        string packageName,
        bool includeSearchConfiguration
)
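
Putting the export signatures above together, the following is a hedged usage sketch (not part of the original article). It assumes an authenticated CSOM ClientContext named `clientContext` connected to a publishing site collection.

```csharp
// Sketch: export the site collection's design package, including
// the search configuration. The result is populated after ExecuteQuery.
ClientResult<DesignPackageInfo> package =
    DesignPackage.ExportEnterprise(clientContext, clientContext.Site, true);
clientContext.ExecuteQuery();

Console.WriteLine("Exported design package version {0}.{1}",
    package.Value.MajorVersion, package.Value.MinorVersion);
```

The exported package lands in the Solution Gallery of the site collection, from which it can be downloaded and applied to other sites with the Install and Apply methods shown earlier.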

50.4 Design tool options for SharePoint Online

<a name="sectionSection3"> </a>

The tools you can use to brand a SharePoint Online site depend on your SharePoint Online edition and the type of site you want to build. The Small Business edition, for example, includes one Team site and one public site. It does not include a Publishing site. You can use the Site Builder add-in in SharePoint Online to customize public site branding.

The Enterprise edition includes a Team site collection at the root web application for the domain that does not include Publishing. It does not include a public site. Use Design Manager to manage SharePoint site branding elements for the Publishing site in the SharePoint Online Enterprise edition.

50.5 Additional resources

<a name="bk_addresources"> </a>

51 SharePoint pages and the page model

This article introduces the SharePoint page model, including master pages, content pages, parts of a SharePoint page, and default page file types.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

A rendered SharePoint page is a combination of three page types:

  • A master page, which controls the layout and appearance of the content.

  • A content page, which contains the page field controls.

  • A user-friendly authoring page, which is where the user adds content.

This article provides an overview of the SharePoint page model, including the page types, the default page files that are available in SharePoint 2013 and SharePoint Online, and information about how pages are processed.

51.1 Key SharePoint page model terms and concepts

<a name="sectionSection0"> </a>

|Term or concept|Definition|Access via|More information|
|:-----|:-----|:-----|:-----|
|Collaboration site|A team site.|||
|Content placeholder|An entry in a master page that reserves a space for controls or content that can be programmatically replaced later.|All SharePoint master pages|Content placeholders are the building blocks of SharePoint master pages.|
|Master page|A page that standardizes the behavior and presentation of the left and top navigation elements of a SharePoint page.|SharePoint file system Master Page Gallery||
|Master page gallery|A special document library in SharePoint 2013 where all branding elements - master pages, page layouts, JavaScript files, CSS, and images - are stored by default. Every site has its own Master Page Gallery.|Settings > Site Settings > Master Pages and Page Layouts|The Master Page Gallery contains catalogs that store branding assets such as master pages and CSS files.<br/>Tip: When you create custom branding elements, store custom assets in the default Master Page Gallery file structure.<br/>Master pages, the Master Page Gallery, and page layouts in SharePoint 2013|
|Minimal Download Strategy (MDS)|A strategy that reduces the amount of data that the browser must download when users navigate from one SharePoint page to another.|Site settings|When MDS is active, SharePoint passes all page requests through /_layouts/15/start.aspx and checks for visual differences between new page requests and the previously loaded page.<br/>Optimize page performance in SharePoint 2013<br/>Minimal Download Strategy overview|
|Navigation|Functionality that enables users to move around the information architecture of a SharePoint site. Navigation elements in SharePoint include search, tree controls, buttons, the ribbon, hyperlinks, tabs, menus, and taxonomy.|Navigation class<br/>NavigationNode class||
|Oslo master|A default master page in SharePoint 2013.|SharePoint file system Master Page Gallery|Unlike the seattle.master master page, the current navigation is in the same position as the top navigation area.|
|Page content control|A control on a publishing site where a Web Part can be added.|||
|Page layout|A template applied to a Publishing page that enforces the consistent presentation of content.|SharePoint file system Master Page Gallery|How to: Create a page layout in SharePoint 2013|
|Page model|The files, content, and interactions that result in a SharePoint page rendered to users in a browser.||Overview of the SharePoint 2013 page model|
|Publishing page|An .aspx page in a Publishing site.|PublishingPage class||
|Publishing site|A SharePoint site that can access publishing sites and pages, which include page layouts, taxonomy, managed navigation, and other web content management and enterprise content management features.|PublishingWeb class|What’s new with SharePoint 2013 site development|
|Seattle.master|A default master page in SharePoint 2013.|SharePoint file system Master Page Gallery||
|Team site|A site designed for users to collaborate on documents, wikis, ideas, processes, and so on.|||
|Text layout|Defines the content areas that appear on a Wiki page.|||
|Text layout control|A wiki page control that can contain text, images, Web Parts, and App Parts.|||
|Top-level site|The default, top-level site provided by the server.||Create a SharePoint site|
|Web Part|Server-side controls that run inside the context of site pages.||Custom actions and property bag entries from a SharePoint app|
|Web Part page|A content page made up of Web Part zones, which can contain Web Parts. Web Parts are represented on Web Part pages by WebPartDefinition objects.||Microsoft.SharePoint.Client.WebParts namespace|
|Web Part zone|An area on a page where a Web Part can be added.|||
|Wiki page|A content page that uses the Enterprise Wiki site template.||Provisioning.Pages sample app|

51.2 SharePoint master pages

<a name="sectionSection1"> </a>

A master page is an ASP.NET file with a .master extension. It includes a <%@ Master directive and defines the top-level HTML elements such as HTML, Head, and Form. It first lists controls and assemblies, and then declares a document type definition (DOCTYPE), which tells the browser how to render the HTML. SharePoint 2013 is tuned to work best with the XHTML 1.0 and HTML5 DOCTYPEs.

SharePoint includes several master pages by default. These master pages provide the default structure and chrome of a given SharePoint page that is appropriate for the SKU and site type, where applicable, specifically on the top and left sides of the page. Table 2 lists the default SharePoint 2013 and SharePoint Online master pages.

Table 2. Default SharePoint master pages

|Master page|Description|
|:-----|:-----|
|Custom.master|System pages, such as forms and views. Used by all SharePoint 2013 and SharePoint Online SKUs.|
|Default.master|Site pages in publishing sites. Included in all SharePoint 2013 and SharePoint Online SKUs. Available when the publishing feature is activated.|
|Application.master|Some system pages, such as scope.aspx and keyword.aspx. Included in all SharePoint 2013 and SharePoint Online SKUs.|
|Minimal.master|Available default master page option in all SharePoint 2013 SKUs.|
|Seattle.master|Available default master page option in all SharePoint 2013 and SharePoint Online SKUs.|
|Oslo.master|Available default master page option in all SharePoint 2013 and SharePoint Online SKUs.|
|Kyoto.master|A master page available in SharePoint Online. |
|Berlin.master|A master page available in SharePoint Online. |
|Lyon.master|A master page available in SharePoint Online. |
|Mysite15.master|OneDrive for Business sites (previously: My Site, personal sites, or OneDrive Pro sites).|

Each default SharePoint master page includes controls that are required for common web programming technologies, such as HTML, CSS, and JavaScript, to function in SharePoint.

Content placeholders hold the place for information defined in content pages. Content placeholders correspond to areas of a page. Each area of a .master page is defined by anywhere from a few to hundreds of content placeholders.

SharePoint master pages use a mix of ASP.NET (<asp:) and SharePoint (<SharePoint:) declarations. The text after the colon in a declaration defines the control’s functionality; for example, SharePoint:PlaceholderGlobalNavigation embeds the global navigation of a SharePoint page into the relevant HTML tags on that page. Content controls in a master page bind content placeholders to content with the ContentPlaceHolderID.

SharePoint provides two types of master pages: system master pages and site master pages. System master pages are applied to all form pages and view pages on a SharePoint site. Site master pages, on the other hand, are used by all pages in a Publishing site. You can tell which kind of master page a site is using by opening the .master page file and viewing the Page directive. A system master page has the following page directive: ~masterurl/default.master. A site master page has the following page directive: ~masterurl/custom.master.

You can use CSOM code to set master page properties—mainly by writing code against the Web object. Change the system master page by using its MasterUrl property, and change the site master page by using the object’s CustomMasterUrl property.
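As a sketch of that approach, the following CSOM snippet reads and then sets both properties on the Web object. The site URL and the contoso.master file name are placeholder values, and authentication setup is omitted.

```csharp
using Microsoft.SharePoint.Client;

// Placeholder site URL; credential setup is omitted here.
using (ClientContext context = new ClientContext("https://contoso.sharepoint.com/sites/team"))
{
    Web web = context.Web;
    context.Load(web, w => w.MasterUrl, w => w.CustomMasterUrl);
    context.ExecuteQuery();

    // Point the system master page and the site master page at a custom copy.
    web.MasterUrl = "/sites/team/_catalogs/masterpage/contoso.master";
    web.CustomMasterUrl = "/sites/team/_catalogs/masterpage/contoso.master";
    web.Update();
    context.ExecuteQuery();
}
```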

Content placeholders often include dynamic tokens, which are important pieces of code that form part of a SharePoint page URL. SharePoint parses URL strings according to the rules of protocols, such as HTTP, that define how hypertext information is transferred between the server and a SharePoint page. Usually, a content placeholder that points to a CSS or theme control will use a relative URL, which in the SharePoint server-side object model is represented as ~SPUrl.

SharePoint uses dynamic tokens to bind the master page to the content page; the binding is defined in the <asp:content> declarations of the page. Table 3 lists dynamic tokens that are found in SharePoint master pages, and either the CSOM properties that replace them when the page is processed, or the form of the URL string that SharePoint renders for that content placeholder.

Table 3. Dynamic tokens in master pages replaced by property values

|Dynamic token|Replaced with|
|---|---|
|~masterurl/default.master|SPWeb.MasterUrl|
|~masterurl/custom.master|SPWeb.CustomMasterUrl|
|~site/<xyz>.master|http://<siteColl>/<subsite1>/<subsite2>/<xyz>.master|
|~sitecollection/<abc>.master|http://<siteColl>/<abc>.master|

Note The dynamic tokens in content placeholders correspond to server-side API properties and methods. When using remote provisioning, write code in CSOM or REST. To learn more about dynamic tokens and SharePoint URLs, see URLs and Tokens in SharePoint 2013. Add-ins for SharePoint use some tokens that apply to site URLs.

51.3 Web Part pages and Wiki pages

<a name="sectionSection2"> </a>

Web Part pages can contain structured and unstructured information. They are made up of Web Part zones. Web Parts placed in Web Part zones can display data from lists, search results, and queries, and can present custom views of data from multiple sources. A Web Part page contains most of the same elements as a standard SharePoint Team site. The Title bar can contain a title, caption, description, company logo, or other image. The Web Part Page adds the following elements:

  • A Web Part Page menu that can be used to add or modify Web Parts, design the page layout, and switch between personal and shared views.

  • A tool pane used to find and add Web Parts and edit properties related to Web Parts and the Web Part page.

Compared to Web Part pages, wiki pages are less structured. Because of their semi-structured to unstructured form, they make it easy for users to create content and collaborate with each other. By default, SharePoint displays a wiki page the first time you view a new Team site.

Enterprise wiki functionality is available in all versions of SharePoint. The Enterprise Wiki template makes it possible to create and use page layouts with wiki pages. When you edit a wiki page, Web Parts, text, and other content are displayed in the text layout. The text layout arranges content areas on a wiki page.

You can use the remote provisioning pattern to create a wiki page. The WikiPageCreationInformation class provides methods you can use to create the wiki page, while the WikiHtmlContent property gets and sets HTML content on the page. The Utility class includes a CreateWikiPageInContextWeb method, which SharePoint uses to create the wiki page in the client runtime context using parameters from the WikiPageCreationInformation class.
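A minimal sketch of that pattern follows; the site URL, page URL, and HTML content are illustrative values (the actual Provisioning.Pages sample is more complete).

```csharp
using Microsoft.SharePoint.Client;
using Microsoft.SharePoint.Client.Utilities;

using (ClientContext context = new ClientContext("https://contoso.sharepoint.com/sites/team"))
{
    // Describe the wiki page to create; the URL and HTML are placeholder values.
    WikiPageCreationInformation pageInfo = new WikiPageCreationInformation
    {
        ServerRelativeUrl = "/sites/team/SitePages/ProvisionedPage.aspx",
        WikiHtmlContent = "<h1>Welcome</h1><p>This page was provisioned remotely.</p>"
    };

    // Create the wiki page in the client runtime context.
    Utility.CreateWikiPageInContextWeb(context, pageInfo);
    context.ExecuteQuery();
}
```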

51.4 Page layouts

<a name="sectionSection3"> </a>

The page layout is the content page of choice for Publishing sites. Page layouts are templates that define different kinds of pages in a SharePoint site, such as articles, by customizing the structure of the body of the page. Just as the Web Part page is a template that exists to arrange Web Part zones and Web Parts on a page, page layouts exist to arrange fields on a page. The field controls defined in a page layout will contain content that an author creates, and the structure of that content will be based on the page layout.

Note Page layouts can include Web Part zones.

Designers can apply styles to page field controls. This gives designers control over how CSS is applied to each field and rendered, yet allows users to create and manage content in each page field.

In SharePoint, content types are reusable collections of metadata (also known as columns) and behavior that define specific items and documents. For example, you might want to create a kind of content that looks and behaves the way you think an online magazine article would. Content types make it possible for you to do that. You might also want to create other unique kinds of content, but reuse and share characteristics of one content type in others. Every page layout is based on exactly one content type. Every content type is assigned a unique Content Type ID.

To learn more about content types, see Introduction to Content Types, Columns, and Custom Information in Content Types.

Important Currently, you can use the remote provisioning pattern to apply out-of-the-box page layouts to a SharePoint site. Although you can provision custom content types on a site by using CSOM code via custom add-ins for SharePoint code, and setting custom ContentTypeId via CSOM is supported in SharePoint Online, setting the ContentTypeId for a custom content type via remote provisioning on on-premises SharePoint sites is not currently supported. For more information, see How to: Create a page layout in SharePoint 2013.
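To illustrate the SharePoint Online case described above, the following sketch provisions a custom content type with an explicit ContentTypeId by using CSOM. The site URL, content type name, ID, and group are hypothetical values.

```csharp
using Microsoft.SharePoint.Client;

using (ClientContext context = new ClientContext("https://contoso.sharepoint.com/sites/pub"))
{
    // Hypothetical content type; the 0x0101 prefix inherits from Document.
    ContentTypeCreationInformation ctInfo = new ContentTypeCreationInformation
    {
        Name = "Magazine Article",
        Id = "0x010100A5DE9600B8A74F5BB8B9BA8A3E3D4C11",
        Group = "Contoso Content Types"
    };

    ContentType articleType = context.Web.ContentTypes.Add(ctInfo);
    context.Load(articleType);
    context.ExecuteQuery();
}
```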

51.5 SharePoint page processing model

<a name="sectionSection4"> </a>

SharePoint is a template-based page rendering system that combines master pages, content pages, and authored content to render pages. The page rendering system is known as the page processing model. Master pages are used by all page instances in the site to which they are applied, and content pages are used by all instances of the page that are based on that content page.

The page processing model interprets and runs all the requests that user agents such as web browsers make to the server. For example, consider a user requesting a page called contoso.aspx. To complete the request, the ASP.NET engine retrieves two pages: the content page associated with contoso.aspx, and the master page associated with the SharePoint site. The engine also retrieves the field controls and Web Parts for the page and renders them.

Note The page processing logic for Team sites is similar to that for Publishing sites.

51.5.1 Page processing

When a SharePoint user loads a Web Part page, SharePoint gets it by parsing the path to its template, page content, and context. It also sets the Web Parts associated with the Web Part page, assigns a WebPartCollection instance to the page, and populates the Web Part page and its Web Parts with content.

When a SharePoint user loads a wiki page (either by using the Enterprise Wiki template on a Team site or a Publishing site), SharePoint gets it by parsing the path to its template, page content, and context. It also sets the text layout control associated with the wiki page, and populates the enterprise wiki page and its text layout with content. To learn more about how to provision a wiki page by using the remote provisioning pattern, see the Provisioning.Pages sample.

51.5.2 Minimal download strategy and <AjaxDelta> controls

In SharePoint, the minimal download strategy feature manages which specific content on a master page to refresh before the page renders. When the strategy is enabled, the content associated with content placeholders wrapped in <SharePoint:AjaxDelta> tags on the master page refreshes before the page renders. Conversely, content associated with content placeholders that are not wrapped in <SharePoint:AjaxDelta> tags is not refreshed when the minimal download strategy is enabled.

You can enable or disable the minimal download strategy through central site administration or by using the SharePoint client-side object model (CSOM). You can activate the feature by using the EnableMinimalDownload property. For more information, see Minimal Download Strategy overview. For more information about how to optimize a master page to work well with the minimal download strategy, see Modify SharePoint components for MDS.

The minimal download strategy feature is enabled by default on SharePoint Team sites, and disabled by default on SharePoint Publishing sites and SharePoint Team sites with Publishing enabled.
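As a sketch, toggling the feature for a single web through CSOM might look like the following; the site URL is a placeholder.

```csharp
using Microsoft.SharePoint.Client;

using (ClientContext context = new ClientContext("https://contoso.sharepoint.com/sites/team"))
{
    Web web = context.Web;
    context.Load(web, w => w.EnableMinimalDownload);
    context.ExecuteQuery();

    // Disable the minimal download strategy for this web.
    web.EnableMinimalDownload = false;
    web.Update();
    context.ExecuteQuery();
}
```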

51.5.3 Creating a custom master page based on seattle.master

You can use remote provisioning to provision site branding elements such as themes to a site, and you can use CSS or JavaScript to show or hide elements or page controls. Customizing a master page provides an additional level of control over the page structure. When you create a custom master page, do not edit and then save a default master page by using its default name (for example, seattle.master). Instead, make a copy of the default master page you want to modify, and rename it.

Important Because of the potential long-term impact of ongoing support costs and maintenance, we recommend that you do not change the structure of a new master page. You can make changes to the master page that support branding that don’t affect the structure, such as changing colors in the header, adding a color background to specific elements of a page, or showing and hiding a site logo. If the default .master page you’re using does not include a structural element, such as a footer, that you want to include on your page, use a different out-of-the-box master page.

To help maintain consistency in a custom master page, follow the existing coding pattern. For example, in areas of the page that use tables, reinforce the coding pattern by using tables. In areas where <DIV> tags or HTML5 are used, match any custom code with <DIV> tags or HTML5. In the long run, this will make any custom master pages that you have to create easier to maintain, and therefore, less expensive.

51.6 Additional resources

<a name="bk_addresources"> </a>

52 SharePoint site branding and page customization solutions

Use the SharePoint page model and composed looks, the SharePoint 2013 theming engine, and CSS to brand your SharePoint site and pages.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

You can customize the look and feel of a SharePoint site in two ways:

  • By using the theming engine to create custom themes (composed looks in SharePoint 2013 and SharePoint Online). At a minimum, themes define colors. A complete theme defines colors, fonts, a background image, and the associated master page, and a .preview file that defines how the .master page is previewed. You can use the remote provisioning pattern to apply themes to sites.

  • By creating custom cascading style sheets (CSS) to apply to SharePoint Online sites. You can use an app for SharePoint and the remote provisioning pattern to provision SharePoint sites to use custom CSS.

Branding changes range from low-cost and simple to high-cost and complex. Users can use the UI to apply composed looks, which include a background image, color palette, fonts, a master page, and an associated .preview file for the master page. You can use the SharePoint 2013 theming engine to design composed looks and provision sites, and you can create custom CSS to modify the look and feel of your site and its elements.
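For the custom CSS option, one common remote provisioning approach is to set the web's AlternateCssUrl property. The following sketch assumes the style sheet has already been uploaded to the Site Assets library; the URLs are placeholders.

```csharp
using Microsoft.SharePoint.Client;

using (ClientContext context = new ClientContext("https://contoso.sharepoint.com/sites/team"))
{
    Web web = context.Web;

    // Apply a custom style sheet to this site and all of its subsites.
    web.AlternateCssUrl = "/sites/team/SiteAssets/contoso.css";
    web.Update();
    context.ExecuteQuery();
}
```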

Important Although it’s possible to create custom master pages and other structural elements as part of a custom branding project, the long-term cost of supporting custom master pages and other custom structural elements is high. Custom branding can make it more costly for your organization to apply upgrades and provide ongoing support.

This section builds on your knowledge about SharePoint development and design tools and practices, to show you how to customize the look and feel of a SharePoint site.

<a name="sectionSection0"> </a>

|Term or concept|Definition|More information|
|---|---|---|
|Alternate CSS|A CSS file other than the default that you can apply to the look and feel of your site.|Use alternate CSS to apply custom CSS to a site and all of its subsites.|
|CSS|A language that tells a browser how to render an HTML or XML document’s styles.|CSS separates document content (HTML or XML) from how the content is presented.|
|Composed look|A combination of fonts, a color palette, a background image, and an associated master page that are applied to the site. Font scheme and color images are optional. The following is the default file location for composed looks: Theme Gallery\15 folder|<p>Composed looks are a convenient way to change the look and feel of sites without making any changes to the structure of a site.</p><p>SharePoint 2013 ships several composed looks by default. When a user applies a composed look, SharePoint applies all the associated design elements of the composed look to a site.</p>|
|Content Search Web Part (CSWP)|Renders content from search results based on a specified query.|<p>Content Search Web Part in SharePoint 2013 (TechNet)</p><p>Content Search Web Part in SharePoint 2013 (MSDN)</p>|
|corev15.css|The CSS file that contains most of the main functionality for SharePoint. The following is the default file location: _layouts\15 folder||
|CSSRegistration|A reference in a master page, such as seattle.master, that loads most CSS that is applied to most of the default UI.|Use the CSSRegistration control in a master page to override default CSS.|
|Custom action|Actions you can use to customize and interact with lists and the ribbon on the host web.| How to: Create custom actions to deploy with SharePoint Add-ins|
|Device channels|Render a single SharePoint publishing site in more than one way by using unique channels to target content rendering on specific devices.| SharePoint 2013 Design Manager device channels|
|Display templates|Templates used by Search Web Parts to show the results of a query made to the search index.| SharePoint 2013 Design Manager display templates|
|Image rendition|Display differently sized versions of an image on a publishing site based on the same source image.| SharePoint 2013 Design Manager image renditions|
|Managed metadata| Features in SharePoint that enable you to define terms, term sets, groups, and labels for terms. Sometimes referred to as taxonomy. In SharePoint 2013, the managed metadata system is the foundation for managed navigation.| Managed metadata and navigation in SharePoint 2013|
|Managed navigation|Navigation for publishing sites that is built based on managed metadata. Navigation is built from a specified term set in the term store. | Managed navigation in SharePoint 2013|
|Master page|A page that standardizes the behavior and presentation of the left navigation and top navigation areas of a SharePoint page.| Master pages, the Master Page Gallery, and page layouts in SharePoint 2013|
|Master Page Gallery|A special document library in SharePoint 2013 where all branding elements-master pages, page layouts, JavaScript files, CSS, and images-are stored by default. Every site has its own Master Page Gallery.When you create custom branding elements, we recommend that you store custom assets in the default Master Page Gallery file structure.| Master pages, the Master Page Gallery, and page layouts in SharePoint 2013|
|Oslo.master|A master page in SharePoint 2013.|Moves the current navigation into the same position as the top navigation region. |
|Page content control|A control on a publishing site where a Web Part can be added.||
|Page layout|A template for a SharePoint publishing site page that lets users lay out information on the page in a consistent way.||
|Quick Launch|Manages the navigation elements on the left side of the page of a collaboration site.|You can add heading links to group navigation items.|
|REST|A stateless architectural style that abstracts architectural elements and uses HTTP verbs to read and write data from web pages that contain XML files.| Get started with the SharePoint 2013 REST service|
|Root web|The top-level web in a site collection.|The root web is also sometimes referred to as the Web Application Root.|
|Seattle.master|The default .master page for SharePoint 2013 team sites and publishing sites.||
|Site layout|See master page.|The site layout combines the .master page of a theme with its corresponding .preview file.|
|Structured navigation|A navigation structure for publishing sites that is based on the site hierarchy of the publishing site. You can add headers and links to manually replace or customize the structured navigation that SharePoint automatically generates.| How to: Customize Navigation in SharePoint Server 2010 (ECM)|
|Theme|A simple way to apply light branding to a SharePoint site. The default file location for themes is the _themes folder of the site.|<p>Themes are an easy way to apply custom branding to SharePoint sites.</p><p>Themes overview for SharePoint 2013</p><p> How to: Deploy a custom theme in SharePoint 2013</p>|
|Theming engine|A set of files and functionality that define the look, feel, behavior, and file associations of composed looks.||
|User Agent String|Information that a browser passes to a website that identifies the software that makes the request from the server.| SharePoint 2013 Design Manager device channels|
|User Custom Action|A CSOM property that returns the collection of custom actions for a website, list, or site collection. The default file location is the following: 15\TEMPLATE\FEATURES<p>For example:</p><p><HideCustomAction GroupId="Galleries"<br />   HideActionId="Themes"<br />   Location="Microsoft.SharePoint.SiteSettings"></p>|<p>UserCustomAction class</p><p>How to: Create custom actions to deploy with SharePoint Add-ins</p><p>How to: Work with User Custom Actions Default Custom Action Locations and IDs</p>|

52.2 In this section

<a name="sectionSection1"> </a>

|Article|Shows you how to…|
|---|---|
|Use composed looks to brand SharePoint sites|Apply composed looks, including colors, fonts, and a background image, to your SharePoint 2013 and SharePoint Online sites.|
|Use remote provisioning to brand SharePoint pages|Use remote provisioning to interact with themes in SharePoint.|
|Use CSS to brand SharePoint pages|Use remote provisioning to interact with themes in SharePoint.|
|Customize a SharePoint page by using remote provisioning and CSS|Use CSS to customize SharePoint rich text fields and Web Part Zones.|
|Update the branding of existing SharePoint sites and page regions|Customize and then refresh the branding of existing SharePoint sites or regions of SharePoint pages, including the ribbon, the site navigation, the Settings menu, the tree view, and the page content.|
|Customize OneDrive for Business site branding|Customize OneDrive for Business sites in Office 365 or by using the add-in model, depending on your organization’s requirements.|

52.3 Additional resources

<a name="bk_addresources"> </a>

53 Synchronize term groups sample add-in for SharePoint

As part of your Enterprise Content Management (ECM) strategy, you can synchronize term groups across multiple SharePoint term stores.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The Core.MMSSync sample shows you how to use a provider-hosted add-in to synchronize a source and target taxonomy. This add-in synchronizes two term stores in the managed metadata service: a source and a target term store. The following objects are used to synchronize term groups:

  • TermStore

  • ChangeInformation

Use this solution if you want to:

  • Synchronize two taxonomies. For example, you might use both SharePoint Online and SharePoint Server 2013 on-premises for different sets of data, but they use the same taxonomy.

  • Synchronize changes made to a specific term group only.

53.1 Before you begin

<a name="sectionSection0"> </a>

To get started, download the Core.MMSSync sample add-in from the Office 365 Developer patterns and practices project on GitHub.

Before you run this add-in, you’ll need permission to access the term store in the managed metadata service. Figure 1 shows the Office 365 admin center where these permissions are assigned.

Figure 1. Assigning permissions to the term store in the SharePoint admin center

Screenshot that shows the SharePoint admin center, with the term store, the taxonomy term store search box, and the term store administrators boxes highlighted.

To assign permissions to the term store:

  1. From the Office 365 admin center, choose term store.

  2. In TAXONOMY TERM STORE, choose the term set that you want to assign an administrator to.

  3. In Term Store Administrators, enter the organizational account that requires term store administrator permissions.

53.2 Using the Core.MMSSync sample app

<a name="sectionSection1"> </a>

When you start the add-in, you see a console application, as shown in Figure 2. You are prompted to enter the following information:

  • The URL of the Office 365 admin center that contains the source term store (this is the URL of the source managed metadata service). For example, you might enter https://contososource-admin.sharepoint.com.

  • The user name and password of a term store administrator on your source managed metadata service.

  • The URL of the Office 365 admin center that contains the target term store (this is the URL of the target managed metadata service). For example, you might enter https://contosotarget-admin.sharepoint.com.

  • The user name and password of a term store administrator on your target managed metadata service.

  • The type of operation you want to perform. You can either:

    • Move a term group (scenario 1) by using the TermStore object.

    • Process changes (scenario 2) by using the ChangeInformation object.

Important This sample add-in works with both SharePoint Online and SharePoint Server 2013 on-premises.

Figure 2. Core.MMSSync console application

Screenshot of the console application prompting for information to be entered.

After you select your scenario, enter the name of the term group you want to synchronize from your source to your target managed metadata service, as shown in Figure 3. For example, you might enter Enterprise.

Figure 3. Term groups in the managed metadata service

Screenshot of the taxonomy term store drop-down list.

53.2.1 Scenario 1 - Move term group

When you select Move Term Group, the add-in prompts you to enter a term group to synchronize and then calls the CopyNewTermGroups method in MMSSyncManager.cs. CopyNewTermGroups then does the following to copy a term group from the source term store to the target term store:

  1. Retrieves the source and target term store objects.

  2. Verifies that the languages of the source and target term stores match.

  3. Verifies that the source term group doesn’t exist in the target term store, and then copies the source term group to the target term store by using CreateNewTargetTermGroup.

You can set the TermGroupExclusions, TermGroupToCopy, and TermSetInclusions parameters to filter which terms get processed.

The following code shows the CopyNewTermGroups and CreateNewTargetTermGroup methods in MMSSyncManager.cs.

Note The code in this article is provided as-is, without warranty of any kind, either express or implied, including any implied warranties of fitness for a particular purpose, merchantability, or non-infringement.

public bool CopyNewTermGroups(ClientContext sourceContext, ClientContext targetContext, List<string> termGroupExclusions = null, string termGroupToCopy = null)
        {
            TermStore sourceTermStore = GetTermStoreObject(sourceContext);
            TermStore targetTermStore = GetTermStoreObject(targetContext);

            
            List<int> languagesToProcess = null;
            if (!ValidTermStoreLanguages(sourceTermStore, targetTermStore, out languagesToProcess))
            {
                Log.Internal.TraceError((int)EventId.LanguageMismatch, "The target termstore default language is not available as language in the source term store, syncing cannot proceed.");
                return false;
            }

            // Get a list of term groups to process. Exclude site collection-scoped groups and system groups.
            IEnumerable<TermGroup> termGroups = sourceContext.LoadQuery(sourceTermStore.Groups.Include(g => g.Name,
                                                                                                       g => g.Id,
                                                                                                       g => g.IsSiteCollectionGroup,
                                                                                                       g => g.IsSystemGroup))
                                                                                              .Where(g => g.IsSystemGroup == false && g.IsSiteCollectionGroup == false);
            sourceContext.ExecuteQuery();

            foreach (TermGroup termGroup in termGroups)
            {
                // Skip term group if you only want to copy one particular term group.
                if (!String.IsNullOrEmpty(termGroupToCopy))
                {
                    if (!termGroup.Name.Equals(termGroupToCopy, StringComparison.InvariantCultureIgnoreCase))
                    {
                        continue;
                    }
                }

                // Skip term groups that you do not want to copy.
                if (termGroupExclusions != null && termGroupExclusions.Contains(termGroup.Name, StringComparer.InvariantCultureIgnoreCase))
                {
                    Log.Internal.TraceInformation((int)EventId.CopyTermGroup_Skip, "Skipping {0} as this is a system termgroup", termGroup.Name);
                    continue;
                }

                // About to start copying a term group.
                TermGroup sourceTermGroup = GetTermGroup(sourceContext, sourceTermStore, termGroup.Name);
                TermGroup targetTermGroup = GetTermGroup(targetContext, targetTermStore, termGroup.Name);

                if (sourceTermGroup == null)
                {
                    continue;
                }
                if (targetTermGroup != null)
                {
                    if (sourceTermGroup.Id != targetTermGroup.Id)
                    {
                        // Term group exists with a different ID, unable to sync.
                        Log.Internal.TraceWarning((int)EventId.CopyTermGroup_IDMismatch, "The term groups have different ID's. I don't know how to work it.");
                    }
                    else
                    {
                        // Do nothing as this term group was previously copied. Term group changes need to be 
                        // picked up by the change log processing.
                        Log.Internal.TraceInformation((int)EventId.CopyTermGroup_AlreadyCopied, "Termgroup {0} was already copied...changes to it will need to come from changelog processing.", termGroup.Name);
                    }
                }
                else
                {
                    Log.Internal.TraceInformation((int)EventId.CopyTermGroup_Copying, "Copying termgroup {0}...", termGroup.Name);
                    this.CreateNewTargetTermGroup(sourceContext, targetContext, sourceTermGroup, targetTermStore, languagesToProcess);
                }
            }

            return true;
        }



private void CreateNewTargetTermGroup(ClientContext sourceClientContext, ClientContext targetClientContext, TermGroup sourceTermGroup, TermStore targetTermStore, List<int> languagesToProcess)
        {
            TermGroup destinationTermGroup = targetTermStore.CreateGroup(sourceTermGroup.Name, sourceTermGroup.Id);
            if (!string.IsNullOrEmpty(sourceTermGroup.Description))
            {
                destinationTermGroup.Description = sourceTermGroup.Description;
            }

            TermSetCollection sourceTermSetCollection = sourceTermGroup.TermSets;
            if (sourceTermSetCollection.Count > 0)
            {
                foreach (TermSet sourceTermSet in sourceTermSetCollection)
                {
                    sourceClientContext.Load(sourceTermSet,
                                              set => set.Name,
                                              set => set.Description,
                                              set => set.Id,
                                              set => set.Contact,
                                              set => set.CustomProperties,
                                              set => set.IsAvailableForTagging,
                                              set => set.IsOpenForTermCreation,
                                              set => set.CustomProperties,
                                              set => set.Terms.Include(
                                                        term => term.Name,
                                                        term => term.Description,
                                                        term => term.Id,
                                                        term => term.IsAvailableForTagging,
                                                        term => term.LocalCustomProperties,
                                                        term => term.CustomProperties,
                                                        term => term.IsDeprecated,
                                                        term => term.Labels.Include(label => label.Value, label => label.Language, label => label.IsDefaultForLanguage)));

                    sourceClientContext.ExecuteQuery();

                    TermSet targetTermSet = destinationTermGroup.CreateTermSet(sourceTermSet.Name, sourceTermSet.Id, targetTermStore.DefaultLanguage);
                    targetClientContext.Load(targetTermSet, set => set.CustomProperties);
                    targetClientContext.ExecuteQuery();
                    UpdateTermSet(sourceClientContext, targetClientContext, sourceTermSet, targetTermSet);

                    foreach (Term sourceTerm in sourceTermSet.Terms)
                    {
                        Term reusedTerm = targetTermStore.GetTerm(sourceTerm.Id);
                        targetClientContext.Load(reusedTerm);
                        targetClientContext.ExecuteQuery();

                        Term targetTerm;
                        if (reusedTerm.ServerObjectIsNull.Value)
                        {
                            try
                            {
                                targetTerm = targetTermSet.CreateTerm(sourceTerm.Name, targetTermStore.DefaultLanguage, sourceTerm.Id);
                                targetClientContext.Load(targetTerm, term => term.IsDeprecated,
                                                                     term => term.CustomProperties,
                                                                     term => term.LocalCustomProperties);
                                targetClientContext.ExecuteQuery();
                                UpdateTerm(sourceClientContext, targetClientContext, sourceTerm, targetTerm, languagesToProcess);
                            }
                            catch (ServerException ex)
                            {
                                if (ex.Message.IndexOf("Failed to read from or write to database. Refresh and try again.") > -1)
                                {
                                    // This exception was due to caching issues and generally is thrown when terms are reused across groups.
                                    targetTerm = targetTermSet.ReuseTerm(reusedTerm, false);
                                }
                                else
                                {
                                    // Rethrow, preserving the original stack trace.
                                    throw;
                                }
                            }
                        }
                        else
                        {
                            targetTerm = targetTermSet.ReuseTerm(reusedTerm, false);
                        }

                        targetClientContext.Load(targetTerm);
                        targetClientContext.ExecuteQuery();

                        targetTermStore.UpdateCache();

                        // Refresh the session and term store references to force a reload of the term that was just
                        // added. This is required because an update change event may follow, and without the refresh
                        // the newly created term cannot be retrieved from the server.
                        targetTermStore = GetTermStoreObject(targetClientContext);

                        // Recursively add the other terms.
                        ProcessSubTerms(sourceClientContext, targetClientContext, targetTermSet, targetTerm, sourceTerm, languagesToProcess, targetTermStore.DefaultLanguage);
                    }
                }
            }
            targetClientContext.ExecuteQuery();
        }

53.2.2 Scenario 2 - Process changes

When you select Process Changes, the add-in prompts you to enter a term group to synchronize, and then calls the ProcessChanges method in MMSSyncManager.cs. ProcessChanges uses the GetChanges method of the TermStore class, together with a ChangeInformation object, to retrieve all changes made to groups, term sets, and terms in the source managed metadata service. It then applies those changes to the target managed metadata service.

Note: This document includes only parts of the ProcessChanges method. To review the entire method, open the Core.MMSSync solution in Visual Studio.

The ProcessChanges method starts by creating a TaxonomySession object.

Log.Internal.TraceInformation((int)EventId.TaxonomySession_Open, "Opening the taxonomy session");
TaxonomySession sourceTaxonomySession = TaxonomySession.GetTaxonomySession(sourceClientContext);
TermStore sourceTermStore = sourceTaxonomySession.GetDefaultKeywordsTermStore();
sourceClientContext.Load(sourceTermStore,
                         store => store.Name,
                         store => store.DefaultLanguage,
                         store => store.Languages,
                         store => store.Groups.Include(group => group.Name, group => group.Id));
sourceClientContext.ExecuteQuery();

Next, it retrieves changes by creating a ChangeInformation object and setting its StartTime property. This example retrieves all changes that were made within the last year.

Log.Internal.TraceInformation((int)EventId.TermStore_GetChangeLog, "Reading the changes");
ChangeInformation changeInformation = new ChangeInformation(sourceClientContext);
changeInformation.StartTime = startFrom;
ChangedItemCollection termStoreChanges = sourceTermStore.GetChanges(changeInformation);
sourceClientContext.Load(termStoreChanges);
sourceClientContext.ExecuteQuery();

The GetChanges method returns a ChangedItemCollection that enumerates all changes that occurred in the term store, as shown in the following code example. The last line of the example checks whether the ChangedItem is a term group. ProcessChanges includes similar checks for term sets and terms.

foreach (ChangedItem _changeItem in termStoreChanges)
{
    if (_changeItem.ChangedTime < startFrom)
    {
        Log.Internal.TraceVerbose((int)EventId.TermStore_SkipChangeLogEntry, "Skipping item {1} changed at {0}", _changeItem.ChangedTime, _changeItem.Id);
        continue;
    }

    Log.Internal.TraceVerbose((int)EventId.TermStore_ProcessChangeLogEntry, "Processing item {1} changed at {0}. Operation = {2}, ItemType = {3}", _changeItem.ChangedTime, _changeItem.Id, _changeItem.Operation, _changeItem.ItemType);

    #region Group changes
    if (_changeItem.ItemType == ChangedItemType.Group)

The changed item type might be a term group, term set, or term. Each changed item type has different operations you can perform on it. The following table lists the operations that you can perform on each changed item type.

What changed? (ChangedItemType) | Operations you can perform (ChangedOperationType)
--------------------------------|--------------------------------------------------
Group   | Delete group, Add group, Edit group
TermSet | Delete term set, Move term set, Copy term set, Add term set, Edit term set
Term    | Delete term, Move term, Copy term, Path change term, Merge term, Add term, Edit term
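To illustrate how the table above maps onto code, the dispatch over changed item types can be sketched as follows. This is an illustration only, not part of the Core.MMSSync sample; the helper methods ProcessGroupChange, ProcessTermSetChange, and ProcessTermChange are hypothetical placeholders.

```csharp
// Illustrative sketch only: dispatch on the changed item type.
// ProcessGroupChange, ProcessTermSetChange, and ProcessTermChange are
// hypothetical helpers, not part of the Core.MMSSync sample.
foreach (ChangedItem changedItem in termStoreChanges)
{
    switch (changedItem.ItemType)
    {
        case ChangedItemType.Group:
            // Groups support delete, add, and edit operations.
            ProcessGroupChange(changedItem);
            break;
        case ChangedItemType.TermSet:
            // Term sets additionally support move and copy.
            ProcessTermSetChange(changedItem);
            break;
        case ChangedItemType.Term:
            // Terms additionally support path change and merge.
            ProcessTermChange(changedItem);
            break;
    }
}
```

Inside each handler, the ChangedOperationType of the item determines which of the operations listed in the table to replay against the target term store.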

The following code shows how to perform a delete operation when a term group was deleted in the source managed metadata service.

#region Delete group
if (_changeItem.Operation == ChangedOperationType.DeleteObject)
{
    TermGroup targetTermGroup = targetTermStore.GetGroup(_changeItem.Id);
    targetClientContext.Load(targetTermGroup, group => group.Name);
    targetClientContext.ExecuteQuery();

    if (!targetTermGroup.ServerObjectIsNull.Value)
    {
        if (termGroupExclusions == null || !termGroupExclusions.Contains(targetTermGroup.Name, StringComparer.InvariantCultureIgnoreCase))
        {
            Log.Internal.TraceInformation((int)EventId.TermGroup_Delete, "Deleting group: {0}", targetTermGroup.Name);
            targetTermGroup.DeleteObject();
            targetClientContext.ExecuteQuery();
        }
    }
}
#endregion

53.3 Additional resources


54 Solution guidance

54.1 Branding and site provisioning

54.1.1 Page model

54.1.2 Development and design tools and practices

54.1.3 Site branding

54.1.3.1 Use composed looks to brand sites

54.1.3.2 Use remote provisioning to brand pages

54.1.3.3 Use CSS to brand pages

54.1.3.4 Customize elements of a SharePoint page

54.1.3.5 Update the branding of existing sites and regions

54.1.3.6 Customize OneDrive for Business site branding

54.1.4 Site provisioning

54.1.4.1 Implement a site classification solution

54.1.4.2 Modify host web lists at creation time

54.1.4.3 Create content types by using CSOM

54.1.4.4 Modify site permissions and get external sharing status

54.1.4.5 Manage users and groups

54.1.5 Metadata, site navigation, and publishing site features

54.1.6 UX Components

54.1.6.1 Customize the UX

54.1.6.2 Create UX controls

54.1.6.3 Display information from a host site

54.1.6.4 Improve performance

54.2 Customizing the “modern” experiences in SharePoint Online

54.2.1 Provisioning “modern” team sites programmatically

54.2.2 Customizing “modern” team sites

54.2.3 Customizing “modern” lists and libraries

54.2.4 Customizing “modern” site pages

54.3 Building well performing SharePoint Online portals

54.3.1 Performance guidance

54.3.2 Information Architecture

54.3.4 Data aggregation guidance

54.3.5 Branding your portals

54.3.6 Portal go live approach

54.4 Composite business add-ins

54.4.1 Migrate InfoPath forms

54.4.2 Data storage options

54.4.3 Corporate event app integration

54.4.4 Call web services from workflows

54.5 ECM solutions

54.5.1 Document library templates

54.5.2 Autotagging

54.5.3 Information management

54.5.4 Records management extensions

54.5.5 Taxonomy operations

54.5.6 Bulk upload documents

54.5.7 Upload large files

54.5.8 Synchronize term groups

54.5.9 Supporting % and # in file and folder with the ResourcePath API

54.6 Localization solutions

54.6.1 Use localization features

54.6.2 Localize UI elements

54.7 Search solutions

54.7.1 Search customizations

54.8 Security and Performance

54.8.1 Authorize provider-hosted add-in users at run time

54.8.2 Authorization considerations for tenants hosted in the Germany, China or US Government environments

54.8.3 Cross-domain images in SharePoint provider-hosted add-ins

54.8.4 Elevated privileges in SharePoint Add-ins

54.8.5 How to provide add-in app only tenant administrative permissions in SharePoint Online

54.8.5.1 Developing using Tenant permissions with App-Only

54.8.6 Set external sharing in Office 365

54.8.7 Handle SharePoint Online throttling

54.8.8 JavaScript Patterns and Performance

54.9 SharePoint Add-in recipes

54.9.1 App-Only and Elevated privileges

54.9.2 Branding SharePoint Sites

54.9.3 Custom Actions

54.9.4 Custom Field Type

54.9.5 Custom Ribbons

54.9.6 Customize your SharePoint site UI using JavaScript embedding

54.9.7 Delegate Controls

54.9.8 Document ID Provider

54.9.9 Event Receivers and List Event Receivers

54.9.10 Feature Stapling

54.9.11 Information Management Policy

54.9.12 JavaScript customizations

54.9.13 List Definition / List Template

54.9.14 List Instance

54.9.15 Localization

54.9.16 Master Pages

54.9.17 MMS manipulation

54.9.18 Modules

54.9.19 OneDrive for Business customization

54.9.20 Performance Considerations

54.9.21 Remote Timer Jobs

54.9.22 Remote event receivers

54.9.23 Search API Usage

54.9.24 Search Configuration

54.9.25 SharePoint change log

54.9.26 Site Columns and Content Types

54.9.27 Site Provisioning

54.9.28 Use asynchronous operations in SharePoint Add-ins

54.9.29 User Controls and Web Controls

54.9.30 User Profile Manipulation

54.9.31 Variations

54.9.32 Web Part

54.9.33 Upload Web Parts

54.9.34 Connect SharePoint add-in parts

54.9.35 Workflows, Actions (Activities), Events, and Forms

54.9.36 Yammer Integration

54.10 Transform farm solutions to the SharePoint add-in model

54.10.1 Replace content types and site columns

54.10.2 Replace files deployed using modules in farm solutions

54.10.3 Replace lists created from list definitions

54.10.4 Replace Web Parts with add-in parts

54.11 Sandbox Solution Transformation Guidance

54.11.1 Web Parts

54.11.2 Event receivers

54.11.3 Feature receivers

54.11.4 InfoPath forms with code

54.12 User Profile Solutions

54.12.1 Read or update user profile properties

54.12.2 Bulk User Profile update API

54.12.3 Migrate user profile properties

54.12.4 Personalize search results

54.12.5 Upload user profile pictures

54.13 Deploying your SharePoint add-ins

54.13.1 Deploy Sites to Microsoft Azure

54.13.2 Use Azure WebJobs with Office 365

54.13.3 Configure Provider-Hosted Add-ins for Distribution

54.13.4 Configure Office 365 Projects for Distribution

54.14 PnP remote provisioning

54.14.1 Introducing the PnP Provisioning Engine

54.14.2 PnP provisioning framework

54.14.3 PnP provisioning engine and the core library

54.14.4 PnP provisioning schema

54.14.5 Provisioning console application sample

54.15 PnP remote timer job framework

54.15.1 The Timer Job Framework

54.15.2 Create remote timer jobs in SharePoint

54.15.3 Getting Started with WebJobs (“timer jobs”)

54.16 PnP PowerShell reference

55 Taxonomy operations sample app for SharePoint

As part of your Enterprise Content Management (ECM) strategy, you can create and read taxonomy data on a SharePoint list.

Applies to: Office 365 | SharePoint 2013 | SharePoint Online

The Core.MMS sample console application shows you how to interact with the SharePoint managed metadata service to create and retrieve terms, term sets, and groups. The same techniques also work in a provider-hosted add-in, such as an ASP.NET MVC web application. Use this solution if you want to migrate terms between SharePoint farms or display terms in your custom add-in.
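As a minimal sketch of the kind of CSOM calls the sample uses, the following snippet enumerates the groups and term sets in the default term store. It assumes an authenticated ClientContext named `context` and references to Microsoft.SharePoint.Client and Microsoft.SharePoint.Client.Taxonomy; it is not code from the Core.MMS sample itself.

```csharp
using System;
using Microsoft.SharePoint.Client;
using Microsoft.SharePoint.Client.Taxonomy;

// Minimal sketch: enumerate groups and term sets in the default term store.
// Assumes "context" is an authenticated ClientContext for your SharePoint site.
TaxonomySession session = TaxonomySession.GetTaxonomySession(context);
TermStore termStore = session.GetDefaultSiteCollectionTermStore();
context.Load(termStore,
             store => store.Name,
             store => store.Groups.Include(
                 group => group.Name,
                 group => group.TermSets.Include(ts => ts.Name)));
context.ExecuteQuery();

foreach (TermGroup group in termStore.Groups)
{
    Console.WriteLine(group.Name);
    foreach (TermSet termSet in group.TermSets)
    {
        Console.WriteLine("  " + termSet.Name);
    }
}
```

Running this requires a live SharePoint site and the term store permissions described below.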

55.1 Before you begin


To get started, download the Core.MMS sample app from the Office 365 Developer patterns and practices project on GitHub.

Before you run this app, you’ll need:

  • The URL of your SharePoint site.

  • Permission to access the term store in the managed metadata service. Figure 1 shows the SharePoint admin center where these permissions are assigned.

    Figure 1. Assigning permissions to the term store in the SharePoint admin center

    Screenshot of the SharePoint admin center with the term store, taxonomy term store search box, and term store administrators boxes highlighted.

To assign permissions to the term store:

  1. From the Office 365 admin center, choose term store.

  2. In TAXONOMY TERM STORE