Saturday, October 31, 2009

Attending TechEd Europe 2009 in Berlin

I'm attending TechEd Europe 2009 in Berlin as an ATE and will be arriving late Sunday, so if you want to connect, give me a ping ;-)

Two of my colleagues from Inceptio (Per Østergaard and Risto Pedersen) will also be at TechEd, and Peter Ingerslev and I are at the Interact 2009 event on Monday.

CU there!


Friday, October 30, 2009

Executing a PowerShell Pipeline from your own program

In Executing PowerShell code from within your own program I showed how to execute PowerShell code, but the interface was rather crude, as the value had to be converted to a string and embedded in the script code. If you have lots of objects to process, this method is inefficient – a pipeline would be much better, as the script source only has to be parsed once and the script can access the objects as real objects.

I have now taken it a step further and created a piece of code that can execute a pipeline. If you have lots of objects – ‘lots’ depends on the amount of memory you have available – this may not be the most efficient method, as all input and output objects are represented as List&lt;PSObject&gt;, giving some overhead. An alternative would be to use a steppable pipeline, but for now I’ll stick to the easy method.

Here’s the code:

public class PSHost
{
    private PSDataStreams _InvokePipeline(string Script,
        List<PSObject> inputPipeline,
        out Collection<PSObject> outputPipeline)
    {
        PowerShell PS = PowerShell.Create();
        PS.AddScript(Script);

        outputPipeline = PS.Invoke<PSObject>(inputPipeline);
        return PS.Streams;
    }

    private string _ConvertStreamsToString(PSDataStreams streams)
    {
        // Use the streams parameter here, not PS.Streams (PS is out of scope in this method)
        string s = "";
        foreach (DebugRecord result in streams.Debug)
        {
            s += "DEBUG: " + result.ToString();
        }
        foreach (VerboseRecord result in streams.Verbose)
        {
            s += "VERBOSE: " + result.ToString();
        }
        foreach (WarningRecord result in streams.Warning)
        {
            s += "WARNING: " + result.ToString();
        }
        foreach (ErrorRecord result in streams.Error)
        {
            s += "ERROR: " + result.ToString();
        }
        return s;
    }

    public string PSExecutePipeline(string Script,
        List<PSObject> inputPipeline,
        out Collection<PSObject> outputPipeline)
    {
        PSDataStreams ds = _InvokePipeline(Script, inputPipeline, out outputPipeline);
        return _ConvertStreamsToString(ds);
    }
}

The script must include $input to pick up the input values – if any. The streams collection is returned as a string, so you can check for warning/error/verbose/debug output. Parsing errors will throw an exception, so use a try-catch to handle that. Naturally, you can change the interface to suit your own requirements.
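For instance, any of the following one-liners could be passed as the Script parameter. These are untested sketches; the only requirement is that they read the incoming objects from $input:

```powershell
# Pass only objects greater than 2 down the pipeline
$input | Where-Object { $_ -gt 2 }

# Emit a transformed object for each input object
$input | ForEach-Object { $_ * 10 }
```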


Note that output objects may be real objects with properties or just simple value types. A property value can be retrieved with p.Properties["propertyname"].Value and a simple value with p.BaseObject.



Example 1 – summing numbers




// Input pipeline
List<PSObject> numbers = new List<PSObject>();

// Add values/objects
numbers.Add(new PSObject(1));
numbers.Add(new PSObject(2));
numbers.Add(new PSObject(3));
numbers.Add(new PSObject(4));
numbers.Add(new PSObject(5));

// Output pipeline
System.Collections.ObjectModel.Collection<PSObject> outputPipeline;

PSHost psh = new PSHost();
string s = psh.PSExecutePipeline("$input | measure-object -sum", numbers, out outputPipeline);
if (string.IsNullOrEmpty(s))
{
foreach (PSObject p in outputPipeline)
{
System.Diagnostics.Debug.WriteLine("Sum is " + p.Properties["Sum"].Value.ToString());
}
}
else
{
System.Diagnostics.Debug.WriteLine("PowerShell failed: " + s);
}


Example 2 – returning a value type




// Re-uses psh and outputPipeline from example 1
List<PSObject> empty = new List<PSObject>();
s = psh.PSExecutePipeline("100", empty, out outputPipeline);
if (string.IsNullOrEmpty(s))
{
foreach (PSObject p in outputPipeline)
{
if (p.BaseObject.GetType().IsValueType)
{
System.Diagnostics.Debug.WriteLine("Number is " + p.BaseObject.ToString());
}
else
{
System.Diagnostics.Debug.WriteLine("Number is " + p.Properties["Value"].Value.ToString());
}
}
}
else
{
System.Diagnostics.Debug.WriteLine("PowerShell failed: " + s);
}

Debug output can be seen with DbgView.



Have fun

Thursday, October 29, 2009

Using ILM for DMZ Account Management

When you need to manage your users and groups in a DMZ, it can be done in several ways. At least these methods can be used -

  • Autonomous. All users and groups are maintained locally in the DMZ.
  • Forest trust. A forest trust is constructed so the DMZ forest trusts the internal accounts. Access can be restricted with selective authentication.
  • Read-only Domain Controller. A RODC is deployed in the DMZ.

Besides these, ILM can also be used and when you read the advantages/disadvantages below, you may or may not decide that ILM is the best for your environment.

Autonomous
  Advantages:
  • Can be totally isolated
  • No firewall setup
  • No information leak from internal AD
  Disadvantages:
  • Expensive owing to added administration
  • Distributed administration
  • No SSO

Forest trust
  Advantages:
  • Good integration with internal AD
  • SSO with Kerberos
  • Access to the DMZ for internal AD objects can be restricted with Selective authentication
  Disadvantages:
  • Complex firewall setup
  • Cannot pick internal objects with the GUI picker (unless you open for LDAP from non-DCs)
  • Have to change the registry to allow RDP logon on WS03. See KB 902336

Read-only domain controller
  Advantages:
  • A single forest
  • SSO
  • Exposure can be controlled by managing which sensitive properties are on the RODC
  Disadvantages:
  • Windows Server 2008 only
  • Complex control of updates like DDNS
  • Cannot limit which users and groups are visible

ILM
  Advantages:
  • Simple firewall setup
  • No inbound firewall rules
  • No ILM license if you use IIFP (Enterprise Edition is required). See more below.
  • A single place of management in the internal AD. Everything is pushed to the DMZ, including password changes. The helpdesk does not need access to the DMZ, and no special procedures are needed for changing properties.
  • Full control of which objects are visible in the DMZ
  • No leak of other objects (the trust cannot be queried)
  • SSO, or at least the same password
  Disadvantages:
  • Requires an extra infrastructure component
  • SQL license (unless an existing one can be used)
  • May not have full SSO to all services (re-enter password)

This seems fine, you say – I know how to do the other stuff, but how complex is it to implement the ILM solution? Well, with ILM 2007 you have to create at least some code, or get the code from someone who has made it (like Inceptio). But besides this, the rest is standard components. A very rough plan looks like this -

  • Install ILM
  • Install and configure PCNS on the internal AD domain controllers to capture password changes
  • Enable password management in ILM
  • Create a DMZ AD management agent, specifying it as the password target
  • Create an internal AD management agent. Specify it as the password source and the DMZ AD management agent as the target
  • Create a management agent that can figure out which objects should be replicated to the DMZ (use a group membership, a naming convention or some other property). Let this populate an expectedDN property. If the logic is simple, it could be done in an MVExtension. In the solution I have made, I have done it using attribute-value-property files and PowerShell code.
  • Flow the properties you want to keep in sync from the internal AD and export them to the DMZ AD
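As a hedged illustration of the expectedDN logic mentioned above, a simple group-membership rule could look like this PowerShell sketch (the group name, OUs and attribute names are purely hypothetical):

```powershell
# Hypothetical rule: members of the DMZ-Users group get an expectedDN in the DMZ OU
if ($user.MemberOf -contains 'CN=DMZ-Users,OU=Groups,DC=internal,DC=local')
{
    $expectedDN = 'CN=' + $user.cn + ',OU=DMZ Accounts,DC=dmz,DC=local'
}
```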

With Forefront Identity Manager (FIM) 2010, you should be able to get rid of the coding part, making the solution more attractive. As for IIFP, it is not supported anymore. Microsoft removed the download at some point but made it available again. According to my sources at Microsoft, no IIFP version of FIM will be made available, so you will have to buy it.

Monday, October 26, 2009

Executing PowerShell code from within your own program

In V2 this got a lot easier. Here’s my PSExecute function. The idea is that you pass in a text value and a script; the script references the value as $_. All output from the script is returned as the result of PSExecute. Exceptions (e.g. syntax errors), errors etc. are returned as well. Depending on the usage, this may not be what you need, but you can change it yourself.

I’m planning to use this in combination with ILM. Having the ability to use a piece of PowerShell script in an advanced flow rule or in the MVExtension seems very useful.

A script could be $_.toupper() etc.

private String PSExecute(String Script, String InputValue)
{
    // Create PS outside the function if called multiple times, for performance reasons
    PowerShell PS = PowerShell.Create();

    // The script could be parsed once to speed things up if the same script is used repeatedly
    // Note: a single quote in InputValue would break the generated script, so escape it (double it) in production code
    PS.AddScript("'" + InputValue + "' | foreach {" + Script + "}");
    String s = "";
    try
    {
        foreach (PSObject result in PS.Invoke())
        {
            s += result.ToString();
        }
    }
    catch (Exception e)
    {
        s += "EXCEPTION: " + e.Message;
        // just continue
    }
    foreach (DebugRecord result in PS.Streams.Debug)
    {
        s += "DEBUG: " + result.ToString();
    }
    foreach (VerboseRecord result in PS.Streams.Verbose)
    {
        s += "VERBOSE: " + result.ToString();
    }
    foreach (WarningRecord result in PS.Streams.Warning)
    {
        s += "WARNING: " + result.ToString();
    }
    foreach (ErrorRecord result in PS.Streams.Error)
    {
        s += "ERROR: " + result.ToString();
    }

    return s;
}





If you want to try it from PowerShell first, here are the statements -




$script='$input | foreach {$_.toupper()}'   # variant that reads $input directly
$script='$_.toupper()'                      # variant matching PSExecute above
$ps=[System.Management.Automation.PowerShell]::Create()
$ps.AddScript('''' + $InputValue + ''' | foreach {' + $script + '}')
$ps.Invoke()
$ps.Streams

Have fun!

Getting PowerShell V2 SDK

After messing around to find it, I’ll make it easier for you!

Unlike V1, the SDK is not a separate download, but part of Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 SP1. After installing it (you can exclude the Win32 parts), add references to the relevant DLLs found in C:\Program Files\Reference Assemblies\Microsoft\WindowsPowerShell\v1.0.
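If you prefer to edit the project file by hand, the reference could look like this hypothetical .csproj fragment (the HintPath must match your SDK install):

```xml
<!-- Hypothetical .csproj fragment; adjust the HintPath to your installation -->
<ItemGroup>
  <Reference Include="System.Management.Automation">
    <HintPath>C:\Program Files\Reference Assemblies\Microsoft\WindowsPowerShell\v1.0\System.Management.Automation.dll</HintPath>
  </Reference>
</ItemGroup>
```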

 

Have fun

Tuesday, October 13, 2009

Exchange Web Services Managed API Release Candidate – Part 2

Developing everyday Line Of Business Applications (LOBAs) often involves handling various types of information. When dealing with a custom CRM system, you often need to collect and bind typed data sources together. One of these sources is often email, but one of its best features – handling unstructured information – is also its worst. Fortunately, Exchange Server 2007 SP1 and the EWS API provide us with tools to structure and categorize all types of information in an Exchange mailbox store. When making connections between two systems, some patterns always emerge. The first is that you need a unique item key from the source system to store in the destination system to make a one-way connection. This strategy has a flaw: when the source item is deleted, you need to remove the reference key in the destination. In practice, people tend to delete items through Outlook and not through your LOBA, and Outlook does not tell your LOBA that the reference has to be deleted. The flaw, in other words, is the tight coupling of information between the source and the destination.

So let me suggest another solution. Let's say you want to connect your LOBA with an item from an Exchange mailbox. Instead of storing the Exchange item ID in your LOBA, try tagging the Exchange item with the ID of the LOBA item you want to connect it with. Then, from the LOBA, search the mailbox for items tagged with the LOBA item ID. This way, when elements have been deleted, you do not end up with orphaned keys, because the search will not return items that have been deleted. To make this work, we need to handle two things: item extended properties and the ability to search for them.

Working with extended properties

To illustrate how to work with extended properties, I will create an email object and store it in the default Inbox.

// Setup the request. es is an instance of ExchangeService
EmailMessage em = new EmailMessage(es);
em.Subject = "Hello World!";
em.Body = "Your base belongs to us!";

// Create the extended property CustomerID as a string type
ExtendedPropertyDefinition epd = new ExtendedPropertyDefinition(DefaultExtendedPropertySet.Common, "CustomerID", MapiPropertyType.String);

// Give the property a value and assign it to the email
em.SetExtendedProperty(epd, "1234");

// Store the message in the inbox
em.Save(WellKnownFolderName.Inbox);

Searching for Extended Properties





// Define a view/window into the search result
ItemView view = new ItemView(50);

// We are only interested in the id of each item
view.PropertySet = new PropertySet(BasePropertySet.IdOnly);

// Create an extended property definition to search for
ExtendedPropertyDefinition epdCustomerID = new ExtendedPropertyDefinition(DefaultExtendedPropertySet.Common, "CustomerID", MapiPropertyType.String);

// Variable for storing the search result
FindItemsResults<Item> foundItems;

do
{
    // Search the inbox for items with the extended property value set to 1234
    foundItems = es.FindItems(WellKnownFolderName.Inbox, new SearchFilter.IsEqualTo(epdCustomerID, "1234"), view);
    foreach (Item current in foundItems)
    {
        // Get the email message from the Exchange service via the item id
        EmailMessage email = EmailMessage.Bind(es, current.Id);
        Console.WriteLine("Message found. Subject: {0} - Body: {1}", email.Subject, email.Body);
    }

    // Offset the view by 50 to page through larger result sets
    view.Offset += 50;
    // ...while there are more results available
} while (foundItems.MoreAvailable);




Other cool things



The item ID can still be very valuable for your LOBA. If you store the ID in your LOBA, you can always find the item again, even if a user decides to move the message to another folder. By using the EmailMessage.Bind method with the item ID, you will always get the message returned. The same goes for folders – they also have an ID. So even if users decide to reorganize their folders, you can still find a folder again by its ID.
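For example, a folder can be re-bound by its stored ID with a sketch like this (es and savedFolderId are assumed to be an ExchangeService instance and a previously stored FolderId):

```csharp
// Re-bind to a folder by its stored id, no matter where the user has moved it
Folder folder = Folder.Bind(es, savedFolderId);
Console.WriteLine("Folder found again: " + folder.DisplayName);
```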

Tuesday, October 06, 2009

Exchange Web Services Managed API Release Candidate – Part 1

For some time now, developers have been able to access Exchange Server 2007 via Exchange Web Services (EWS). To get this working, you basically added a web reference in Visual Studio and constructed a request depending on the type of function you wanted to execute. EWS was a great step away from the various messaging APIs (CDO, MAPI, WebDAV), which each had their own strengths and weaknesses. But EWS can still be laborious, and you often end up building your own stub on top of the EWS web reference. Through my teaching of Developing for Microsoft Unified Communications R2, I got to know a new API called the Exchange Web Services Managed API. In short: this is the way – this is the API for programming against Exchange Server 2007 with SP1. In the next few blog posts I will run through some of the basics of this new API. Sending an email seems like a good place to start.

Sending email with EWS (C#)

// Setup the request
MessageType emailMessage = new MessageType();

// Set the To email address
emailMessage.ToRecipients = new EmailAddressType[1];
emailMessage.ToRecipients[0] = new EmailAddressType();
emailMessage.ToRecipients[0].EmailAddress = "email@receiver.somewhere";

// Set the From email address
emailMessage.From = new SingleRecipientType();
emailMessage.From.Item = new EmailAddressType();
emailMessage.From.Item.EmailAddress = "email@sender.somewhere";

// Set the Subject
emailMessage.Subject = this.textBoxSubject.Text;

// Set the Body
emailMessage.Body = new BodyType();
emailMessage.Body.BodyType1 = BodyTypeType.Text;
emailMessage.Body.Value = "Something to write about";

// Create the CreateItemType in order to create and send the email
CreateItemType request = new CreateItemType();
request.Items = new NonEmptyArrayOfAllItemsType();
request.Items.Items = new ItemType[1];
request.Items.Items[0] = emailMessage;
request.MessageDisposition = MessageDispositionType.SendAndSaveCopy;
request.MessageDispositionSpecified = true;

// Finally, call the CreateItem method to create and send the email
CreateItemResponseType response = _service.CreateItem(request);

// Analyze the response message for success/failure
ItemInfoResponseMessageType responseMessage = response.ResponseMessages.Items[0] as ItemInfoResponseMessageType;
if (responseMessage.ResponseClass == ResponseClassType.Success)
    success = true;

MessageBox.Show("Email send succeeded: " + success);

Sending email with EWS Managed API (C#)

// Create the service binding for an Exchange Server 2007
ExchangeService es = new ExchangeService(ExchangeVersion.Exchange2007_SP1);

// Assign credentials for accessing the sender account
es.Credentials = new System.Net.NetworkCredential("sender@email.somewhere", "SuperSecret#1");

// Use autodiscover to find the right Exchange front end
es.AutodiscoverUrl("pli@inceptio.dk");

// Create an email message
EmailMessage em = new EmailMessage(es);
em.Subject = "Hello World!";
em.Body = "Using EWS Managed API";
em.ToRecipients.Add("receiver@email.somewhere");
em.SendAndSaveCopy();