Tuesday, December 08, 2009

MS PM comments on CUCiMOC(kup) and the joint Cisco / Microsoft support statement

Take a look at the blog post Cisco: Just Like Any Other Office Communications Server ISV?   for some official words (finally) from Microsoft on Cisco’s CUCiMOC and Microsoft Office Communications Server integration.

He hits the nail on the head regarding many of the questions I have raised with customers, including the supportability problems of integrating at the desktop level with program version dependencies, and especially the (lost) user experience when using CUCiMOC. Note also that Cisco is currently targeting version 8.0 in Q3 CY10 for Windows 7 support (and even then with only limited 64-bit support).

As he also notes, it is important to be aware of the dependencies and lost features (RDP sharing, audio/video/web conferencing etc. – features you would otherwise need to buy from Cisco).

Read the joint support statement here and Haberkorn’s blog post for more info.

Thursday, November 19, 2009

Messing with output from Format-Table

Found a question at psobject.codeplex.com:

I was wondering if I could write the output without any blank spaces between 2 fields. For example, I am using hash tables to display 2 columns from dir output. But it always comes up with a blank space between those 2 data elements. I need this to generate a fixed-format output with data elements only and no spaces in between. Any help is greatly appreciated.

$column1 = @{expression="mode";width=5;label="mode";alignment="left"}
$column2 = @{expression="name";width=10;label="name";alignment="left"}

$dir |format-table $column1,$column2

mode  name
----  ----
d---- download
d---- extract
-a--- alias.txt
-a--- Compute...
-a--- execute...
-a--- get_dn.ps1
-a--- hh

Well, it can be done. I looked into the objects Format-Table spits out and, after some poking around, I came up with this -

$column1 = @{expression="mode";width=5;label="mode";alignment="left"}
$column2 = @{expression="name";width=20;label="name";alignment="left"}

# Save widths, all non-fixed length value should specify a width
$widths=@{}
dir | format-table $column1,@{l="|";e={"|"}},$column2 | foreach {
    if ($_.pstypenames[0] -eq "Microsoft.PowerShell.Commands.Internal.Format.FormatEntryData") {
        # Capture the values and convert them to one value
        $value=""
        $count=$_.formatentryinfo.formatPropertyFieldList.count
        foreach($i in 0..($count-1)) {
            $value+=$_.formatentryinfo.formatPropertyFieldList.item($i).propertyvalue.ToString().padright($widths.$i)
        }
        # Delete all but one field
        $_.formatentryinfo.formatPropertyFieldList.removerange(1,$count-1)
        # and update its value
        $_.formatentryinfo.formatPropertyFieldList.item(0).propertyValue=$value
        $_
    }
    elseif ($_.pstypenames[0] -eq "Microsoft.PowerShell.Commands.Internal.Format.FormatStartData") {
        # Capture the headers and convert them to one header
        $value=""
        $width=0

        $count=$_.shapeinfo.tablecolumninfolist.count
        foreach($i in 0..($count-1)) {
            $w=$_.shapeinfo.tablecolumninfolist.item($i).width
            $width+=$w
            $widths.$i=$w
            $value+=$_.shapeinfo.tablecolumninfolist.item($i).propertyname.ToString().padright($w)
        }
        # Delete all but one field
        $_.shapeinfo.tablecolumninfolist.removerange(1,$count-1)
        # and update its value
        $_.shapeinfo.tablecolumninfolist.item(0).propertyName=$value
        $_.shapeinfo.tablecolumninfolist.item(0).width=$width
        $_
    }
    else {
        $_
    }
}







If you like it, convert it to a function as an exercise ;)
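If all you need is the final text (not Format-Table's format objects), a minimal alternative sketch uses the -f format operator to build fixed-width lines directly; the widths 5 and 20 below just mirror the columns in the question:

```powershell
# Build fixed-width output with no separator by formatting strings directly.
# {0,-5} left-aligns the first value in 5 characters, {1,-20} the second in 20.
dir | ForEach-Object { "{0,-5}{1,-20}" -f $_.Mode, $_.Name }
```

This loses the table header and truncation behavior, but for feeding a fixed-format file it is often all you need.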



 



Happy formatting!

PS Remoting to Home Server

I wanted to PowerShell remote to my home server, but as it is not in the domain of my PC (for the good reason that joining is impossible), I had to add it to TrustedHosts.

This is my story.

First, I enabled PS remoting on the home server with a simple

Enable-PSRemoting





Next, I attempted to access the server




enter-pssession server -cre server\administrator





By doing so, I received a very long error message.



I tried with -Authentication Negotiate. This reduced the error message to 10 lines ;) It told me to configure TrustedHosts with Winrm.cmd.



I looked at Winrm.cmd, but it looked very complicated, and this was at the end of the day. Luckily, the WSMAN drive popped up in my mind. I switched to my administrative account and did this -




PS C:\Users\user> cd wsman:
PS WSMan:\> dir

WSManConfig:

ComputerName Type
------------ ----
localhost Container

PS WSMan:\> cd .\localhost
PS WSMan:\localhost> dir


WSManConfig: Microsoft.WSMan.Management\WSMan::localhost

Name Value Type
---- ----- ----
MaxEnvelopeSizekb 150 System.String
MaxTimeoutms 60000 System.String
MaxBatchItems 32000 System.String
MaxProviderRequests 4294967295 System.String
Client Container
Service Container
Shell Container
Listener Container
Plugin Container
ClientCertificate Container


PS WSMan:\localhost> cd .\Client
PS WSMan:\localhost\Client> dir


WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Client

Name Value Type
---- ----- ----
NetworkDelayms 5000 System.String
URLPrefix wsman System.String
AllowUnencrypted false System.String
Auth Container
DefaultPorts Container
TrustedHosts System.String


PS WSMan:\localhost\Client> Set-Item .\TrustedHosts server

WinRM Security Configuration.
This command modifies the TrustedHosts list for the WinRM client. The computers in the TrustedHosts list might not be authenticated. The
client might send credential information to these computers. Are you sure that you want to modify this list?
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y
PS WSMan:\localhost\Client>









Back to my standard user




PS WSMan:\localhost> enter-pssession server -cre server\administrator
[server]: PS C:\Documents and Settings\Administrator\My Documents>







and it worked.



 



Happy remoting

Tuesday, November 17, 2009

Laissez-Faire Access Control

Bruce Schneier has an abstract of a paper which claims that enabling users to grant themselves the access they need, while auditing that access, is better than a centrally controlled setup.

Read it for yourself – at least read the abstract.

Monday, November 16, 2009

PowerShell 2.0 *is* supported for Exchange 2007 SP2

In case you wondered, the greatly improved PowerShell 2.0 can be installed on servers running Exchange 2007. But you must be running Service Pack 2.

BTW: You cannot find PowerShell v2 on www.microsoft.com/downloads. You have to go to http://support.microsoft.com/kb/968929.
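If you want to verify the combination before and after installing, a quick sketch from the Exchange Management Shell (Exchange 2007 SP2 reports as version 8.2 in AdminDisplayVersion):

```powershell
# Engine version - $PSVersionTable only exists in PowerShell v2,
# so its mere presence tells you v2 is running
$PSVersionTable.PSVersion

# Exchange build - SP2 shows up as "Version 8.2 (...)"
Get-ExchangeServer | Select-Object Name, AdminDisplayVersion
```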

Thursday, November 12, 2009

Implementing OCS presence in Outlook Live Server (a.k.a. Exchange 2010 OWA)

The tools required to implement OCS presence in Exchange 2010 OWA have been released -

Microsoft Office Communications Server 2007 R2 Web Trust Tool

Microsoft Office Communications Server 2007 R2 Web Service Provider

And instead of me showing how to do it, I would recommend you checkout the blog post Implementing integrated OCS in Exchange 2010 from Chris and Robin’s Technology Blog.

One addition I have to their post is that the reference material for the OCS Web Service Provider can be found on the OCS TechNet site here.

Wednesday, November 11, 2009

Picking files with the mouse using Get-DroppedFile

Sometimes (often?) it is just easier to pick your files with the mouse. As long as the files are in one folder, that is not that hard, but if you have files scattered all around, it is tougher. Right-clicking and copy-as-path is annoying, too.

To make this easier, I have created a small drop box function. A small transparent window is shown, and when you drop files on it, those files are sent to the output pipeline where you can do the rest of your processing.

I made the forms part using Visual C#. Relatively trivial. And Add-Type enabled me to embed it into my script. The hard part was making it async so that files would appear in the output pipeline as soon as they were dropped. I had to resort to good old VB5-style DoEvents (I just revealed my age, I guess). If you can come up with a non-polling solution, please let me know.

What it can be used for -

  • Testing scripts with different files
  • Move photos to a folder, converting them as they are moved
  • Renaming files
  • Compressing files
  • continue the list yourself

All tasks where you – the human – can make the decision about what to do with a file are relevant.

With PowerShell v2 now available on all platforms, do I have to say that this is a V2-only script?

Please, read the comments in the script for further information.

Get-DroppedFile.ps1


<#
.Synopsis
Create a drop box window and output the files dropped to the pipeline
.Description
Create a drop box window. When files are dropped, they are sent to the output pipeline right away.
Stop Get-DroppedFile by closing the window.
.Inputs
None
.Outputs
File names (-asText), IO.DirectoryInfo or IO.FileInfo objects
.Example
Get-DroppedFile | Copy -destination e:\ -passthru | foreach { $_.Attributes=$_.Attributes.ToString()+",readonly" }
Copy dropped files to e:\ and set the readonly bit
#>

param(
   [string]
   # The caption of the drop box
   $Caption,
   [switch]
   # Return file names (full path) as text
   $AsText,
   [switch]
   # Recurse directories, i.e. the folder itself is not returned, only its children
   $Recurse,
   [switch]
   # (Internal switch used to detect -sta invocation)
   $_InternalReinvoked)

# Create the script code - direct execution or SingleThreadedApartment is determined later
$script={

$loaded=$false
try {
    $null=[system.type] "Get_DroppedFile.Form1"
    $loaded=$true
}
catch {}

# The form code. Created in Visual C# 2008 Express and slightly adopted
if (!$loaded) {
    add-type -TypeDefinition @'

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Drawing;
using System.Windows.Forms;

namespace Get_DroppedFile
{
    public partial class Form1 : Form
    {
        Color defaultColor;

        public Form1()
        {
            InitializeComponent();
            defaultColor = this.BackColor;
        }

        public void avoidWarning()
        {
        }

        private void Form1_DragDrop(object sender, DragEventArgs e)
        {
            // Back to default color
            this.BackColor = defaultColor;
        }

        private void Form1_DragOver(object sender, DragEventArgs e)
        {
            // Start dragdrop
            e.Effect = DragDropEffects.Copy;
        }

        private void Form1_DragEnter(object sender, DragEventArgs e)
        {
            // Visual feedback during drag
            this.BackColor = Color.FromArgb(defaultColor.ToArgb() - 0x101010);
        }

        private void Form1_DragLeave(object sender, EventArgs e)
        {
            // Restore color
            this.BackColor = defaultColor;
        }
    }
}

namespace Get_DroppedFile
{
    partial class Form1
    {
        /// <summary>
        /// Required designer variable.
        /// </summary>
        private System.ComponentModel.IContainer components = null;

        /// <summary>
        /// Clean up any resources being used.
        /// </summary>
        /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
        protected override void Dispose(bool disposing)
        {
            if (disposing && (components != null))
            {
                components.Dispose();
            }
            base.Dispose(disposing);
        }

        #region Windows Form Designer generated code

        /// <summary>
        /// Required method for Designer support - do not modify
        /// the contents of this method with the code editor.
        /// </summary>
        private void InitializeComponent()
        {
            this.SuspendLayout();
            //
            // Form1
            //
            this.AllowDrop = true;
            this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
            this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
            this.BackColor = System.Drawing.SystemColors.ActiveCaption;
            this.ClientSize = new System.Drawing.Size(116, 49);
            this.Cursor = System.Windows.Forms.Cursors.Default;
            this.MaximizeBox = false;
            this.MinimizeBox = false;
            this.Name = "Form1";
            this.Opacity = 0.75;
            this.ShowIcon = false;
            this.Text = "Drop Box";
            this.TopMost = true;
            //this.Load += new System.EventHandler(this.Form1_Load);
            this.DragLeave += new System.EventHandler(this.Form1_DragLeave);
            this.DragDrop += new System.Windows.Forms.DragEventHandler(this.Form1_DragDrop);
            this.DragEnter += new System.Windows.Forms.DragEventHandler(this.Form1_DragEnter);
            this.DragOver += new System.Windows.Forms.DragEventHandler(this.Form1_DragOver);
            this.ResumeLayout(false);
        }

        #endregion
    }
}

'@
 -verbose -ReferencedAssemblies system.drawing,system.windows.forms
}

# Create our form object
$form=New-object get_droppedfile.form1

# Get rid of any events leftover
get-event get-droppedfile -erroraction silentlycontinue | remove-event

# Add handlers
# The handlers transfer the action/file to the main loop using event
$form.add_dragdrop( { $null=new-event -sourceidentifier get-droppedfile -messagedata $args[1].Data.GetData("FileDrop", $true)  })
$form.add_formclosed( { $null=new-event -sourceidentifier get-droppedfile -messagedata "[close]" })

#Register-ObjectEvent -InputObject $form -EventName dragdrop #-SourceIdentifier blah#
#Register-ObjectEvent -InputObject $form -EventName formclosed #-SourceIdentifier blah

# Custom caption
if ($caption) {$form.text=$caption}

# This is the tricky part. Dropping is quite simple, but feeding the output pipeline with
# the dropped files (so you do not have to wait until the drop box is closed) is not simple.
# I came up with this solution:
# - Do not use ShowDialog as is modal and will suspend PowerShell processing
# - use Show and doevents in a loop
# - this method consumes some CPU, but the Start-Sleep keeps it to a few per cent
# - The Event actions generates events and they are read here in the main loop
# and converted to file names which are sent to the pipeline

# Show the drop box
$form.show()

do {
    $e=get-event get-droppedfile -erroraction silentlycontinue # suppress no such events
    $exit=$e.messagedata -eq "[close]" # Test close message
    if ($e.messagedata -and !$exit) {$e.MessageData} # Send file to pipeline
    if ($e) {$e | remove-event} # Remove event from queue
    if ($exit) {break}
    start-sleep -m 100 # Wait a little
    [System.Windows.Forms.Application]::doevents() # React to form events so the window can be moved etc.
} while($true)

# Shutdown
$form.close()
} # end of script assignment


# Generate command for -sta recursive call. Done here where $myinvocation has the right value
$command = "&'" + $myinvocation.mycommand.definition + "' -caption '$caption' -_InternalReinvoked"

# Execute the next in a scriptblock so output can be piped
&{

# Forms must run in SingleThreadedApartment style
# Re-invoke PowerShell if necessary
    if ($host.runspace.ApartmentState -ne "sta") {


     write-verbose "Invoking PowerShell with -sta"
        $bytes = [System.Text.Encoding]::Unicode.GetBytes($command)
        $encodedCommand = [Convert]::ToBase64String($bytes)
        powershell.exe -sta -noprofile -encodedCommand $encodedCommand
     #powershell -sta -noprofile $script
    }
    else {
     &$script
    }

} | where {$_} | foreach {
    # Handle the different return options
    if ($_InternalReinvoked.ispresent) {
        # -sta call, always return strings
        $_
    }
    elseif ($recurse.ispresent -and (test-path -pathtype container $_)) {
        # Recurse folder tree, return text or objects
        Get-ChildItem $_ -recurse -force | foreach {
            if ($astext.ispresent) {
                $_.fullname
            }
            else {
                $_
            }
        }
    }
    elseif ($astext.ispresent) {
        $_
    }
    else {
        Get-Item $_
    }
}


Have fun

Sunday, November 08, 2009

ATE Schedule for TechEd Berlin

I’m in Copenhagen waiting for my flight to Berlin. If you want to touch base then I have booth duty at the UC Ask-The-Experts (and Dennis ;-) area Tuesday from 15:15 – 18:15 and Friday 11:30 – 14:45.

CU there !

Wednesday, November 04, 2009

Code Contracts and Pex

If you are coding, you should check out this Channel 9 video (11:30 minutes) where Manuel Fähndrich and Peli de Halleux show you how to specify code contracts and test them using Pex directly in your coding environment.

Here’s a screen shot showing the Contract statements in the code and the Pex test runs below -


Happy coding

Tuesday, November 03, 2009

OCS Cumulative Server Update Installer

One of the biggest serviceability issues with OCS 2007 R2 hotfixes has been that you as an admin had a list of 17 (seventeen – yes) different hotfixes that had to be applied to the correct server/role. This was done by checking the 13 (thirteen) different server roles against the list of updates that applied to them and then manually installing each of the necessary updates separately.

This “design issue” has now been mitigated with the “Cumulative Server Update Installer”, which checks each server and its roles and then suggests and applies the necessary hotfixes to your OCS server (how hard can it be ;-).

So: one installer (and one download, containing all the hotfixes) that handles all servers/roles, and it can be run either through the GUI or scripted from the command line.

You can download the installer from KB968802, which contains a list of all the updates (the October patches), a description of the update process and a download link.

At the download site you can download each update individually, but just scroll down to the download called “ServerUpdateInstaller.exe” and download only that. Do note that you should execute it from an empty folder, as it extracts all the necessary updates for the relevant server role on execution (after installation has completed, it removes the updates and only the logs from each applied update are left).

Below is an example from ServerUpdateInstaller executed on a mediation server:

ServerUpdateInstaller.exe

As you can see, it lists the existing installed version, the new updated version and, not least, a link to the hotfix KB.

Do note that the July database update found in KB969834 isn’t installed automagically – this update has to be run manually!

Good work Microsoft (And happy updating to all of you ;-)

Monday, November 02, 2009

PowerShell V2 now Available on Older Operating Systems

Now you can safely forget V1 and switch to the vastly superior V2. As of last week it is now available on -

  • Windows Server 2008 with Service Pack 2
  • Windows Server 2003 with Service Pack 2
  • Windows Vista with Service Pack 2
  • Windows Vista with Service Pack 1
  • Windows XP with Service Pack 3
  • In Windows 7 and WS08 R2 it is part of the package.

    PS2 is part of the Windows Management Framework, which also includes WinRM 2.0 and BITS 4.0.

    BTW: You need to install PowerShell 2 on server core. Read how in KB 976736.

    Windows Server 2008 R2 Service and Virtual Accounts

    One of the best reasons for upgrading to R2 is the new account types for managing services. Changing the passwords of user accounts used for running services, scheduled tasks and application pools is often a real pain and is consequently often skipped. And wouldn’t it be nice if it was handled automatically, like a computer account? Well, that is exactly what R2 offers.

    Two new types of service accounts are available in Windows Server® 2008 R2 and Windows® 7—the managed service account and the virtual account. The managed service account is designed to provide crucial applications such as SQL Server and IIS with the isolation of their own domain accounts, while eliminating the need for an administrator to manually administer the service principal name (SPN) and credentials for these accounts. Virtual accounts in Windows Server 2008 R2 and Windows 7 are "managed local accounts" that can use a computer's credentials to access network resources.

    Read the Service Accounts Step-by-Step Guide for more information.
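As a taste of what this looks like, here is a rough sketch using the Active Directory module cmdlets on R2; the account and server names are hypothetical, so see the step-by-step guide for the full procedure:

```powershell
# Create the managed service account in the domain
Import-Module ActiveDirectory
New-ADServiceAccount -Name "SqlSvc01"

# Authorize the target computer (hypothetical name APPSRV01) to use it
Add-ADComputerServiceAccount -Identity "APPSRV01" -ServiceAccount "SqlSvc01"

# On the target server: install the account locally. The service is then
# configured to run as DOMAIN\SqlSvc01$ with a blank password - the domain
# handles password changes automatically, just like a computer account.
Install-ADServiceAccount -Identity "SqlSvc01"
```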

    DFS, IPv6 and – sort of – disabling it

    Ask the Directory Services Team has a good article on troubleshooting DFS links (DFS Referrals and IPv6: Outta site!) as well as a discussion of how not to disable IPv6 (unbinding it from an adapter) and how to do it correctly (KB929852).

    In case you really need to disable IPv6, consider using a Group Policy Preference or automate it with PowerShell -

    Set-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters DisabledComponents 0xffffffff -type dword





    Useful information.

    Saturday, October 31, 2009

    Attending TechEd Europe 2009 in Berlin

    I’m attending TechEd Europe 2009 in Berlin as an ATE and will be arriving late Sunday, so if you want to connect, give me a ping ;-)

    Two of my colleagues from Inceptio (Per Østergaard and Risto Pedersen) will also be at TechEd, and Peter Ingerslev and I are at the Interact 2009 event on Monday.

    CU there !


    Friday, October 30, 2009

    Executing a PowerShell Pipeline from your own program

    In Executing PowerShell code from within your own program I showed how to execute PowerShell code, but the interface was rather crude, as the value had to be converted to a string and embedded in the script code. If you have lots of objects to process, this method is inefficient – a pipeline would be much better, as the script source only has to be parsed once and the script can access the objects as real objects.

    I have now taken it a step further and created a piece of code that can execute a pipeline. If you have lots of objects – ‘lots’ depends on the amount of memory you have available – this may not be the most efficient method. All input and output objects will be represented as List<PSObject> giving some overhead. An alternative would be to use a steppable pipeline but for now, I’ll stick to the easy method.

    Here’s the code

    public class PSHost
    {
        private PSDataStreams _InvokePipeline(string Script,
            List<PSObject> inputPipeline,
            out Collection<PSObject> outputPipeline)
        {
            PowerShell PS = PowerShell.Create();
            PS.AddScript(Script);

            outputPipeline = PS.Invoke<PSObject>(inputPipeline);
            return PS.Streams;
        }

        private string _ConvertStreamsToString(PSDataStreams streams)
        {
            string s = "";
            foreach (DebugRecord result in streams.Debug)
            {
                s += "DEBUG: " + result.ToString();
            }
            foreach (VerboseRecord result in streams.Verbose)
            {
                s += "VERBOSE: " + result.ToString();
            }
            foreach (WarningRecord result in streams.Warning)
            {
                s += "WARNING: " + result.ToString();
            }
            foreach (ErrorRecord result in streams.Error)
            {
                s += "ERROR: " + result.ToString();
            }
            return s;
        }

        public string PSExecutePipeline(string Script,
            List<PSObject> inputPipeline,
            out Collection<PSObject> outputPipeline)
        {
            PSDataStreams ds = _InvokePipeline(Script, inputPipeline, out outputPipeline);
            return _ConvertStreamsToString(ds);
        }
    }











    The script must include $input to pick up the values – if any. The streams collection is returned as a string, so you can check for warning/error/verbose/debug/progress output. Parsing errors will throw an exception; use a try/catch to handle that. Naturally, you can change the interface to your own requirements.


    Note that output objects may be real objects with properties or just simple value types. A property value can be retrieved with p.Properties["propertyname"].Value and a simple value with p.BaseObject.



    Example 1 – summing numbers




    // Input pipeline
    List<PSObject> numbers = new List<PSObject>();

    // Add values/objects
    numbers.Add(new PSObject(1));
    numbers.Add(new PSObject(2));
    numbers.Add(new PSObject(3));
    numbers.Add(new PSObject(4));
    numbers.Add(new PSObject(5));

    // Output pipeline
    System.Collections.ObjectModel.Collection<PSObject> outputPipeline;

    PSHost psh = new PSHost();
    string s = psh.PSExecutePipeline("$input | measure-object -sum", numbers, out outputPipeline);
    if (string.IsNullOrEmpty(s))
    {
        foreach (PSObject p in outputPipeline)
        {
            System.Diagnostics.Debug.WriteLine("Sum is " + p.Properties["Sum"].Value.ToString());
        }
    }
    else
    {
        System.Diagnostics.Debug.WriteLine("PowerShell failed: " + s);
    }









    Example 2 demonstrates how to return a value type




    // Re-uses the variables from example 1
    List<PSObject> empty = new List<PSObject>();
    s = psh.PSExecutePipeline("100", empty, out outputPipeline);
    if (string.IsNullOrEmpty(s))
    {
        foreach (PSObject p in outputPipeline)
        {
            if (p.BaseObject.GetType().IsValueType)
            {
                System.Diagnostics.Debug.WriteLine("Number is " + p.BaseObject.ToString());
            }
            else
            {
                System.Diagnostics.Debug.WriteLine("Number is " + p.Properties["Value"].Value.ToString());
            }
        }
    }
    else
    {
        System.Diagnostics.Debug.WriteLine("PowerShell failed: " + s);
    }





    Debug output can be seen with DbgView.



    Have fun

    Thursday, October 29, 2009

    Using ILM for DMZ Account Management

    When you need to manage your users and groups in a DMZ, it can be done in several ways. At least these methods can be used -

    • Autonomous. All users and groups are maintained locally in the DMZ.
    • Forest trust. A forest trust is constructed so the DMZ forest trusts internal accounts. Access can be restricted with selective authentication.
    • Read-only Domain Controller. A RODC is deployed in the DMZ.

    Besides these, ILM can also be used and when you read the advantages/disadvantages below, you may or may not decide that ILM is the best for your environment.

    Autonomous

    Advantages:
    • Can be totally isolated
    • No firewall setup
    • No information leak from internal AD

    Disadvantages:
    • Expensive owing to added administration
    • Distributed administration
    • No SSO

    Forest trust

    Advantages:
    • Good integration with internal AD
    • SSO with Kerberos
    • Access to DMZ for internal AD objects can be restricted with selective authentication

    Disadvantages:
    • Complex firewall setup
    • Cannot pick internal objects with GUI picker (unless you open for LDAP from non-DC)
    • Have to change the registry to allow RDP logon on WS03. See KB 902336

    Read-only domain controller

    Advantages:
    • A single forest
    • SSO
    • Exposure can be controlled by managing what sensitive properties are on the RODC

    Disadvantages:
    • Windows Server 2008 only
    • Complex control of updates like DDNS
    • Cannot limit which users and groups are visible

    ILM

    Advantages:
    • Simple firewall setup
    • No inbound firewall rules
    • No ILM license if you use IIFP (Enterprise Edition is required). See more below.
    • Single place of management in the internal AD. Everything is pushed to the DMZ, including password changes. Helpdesk does not need access to the DMZ, nor even special procedures for changing properties.
    • Full control of which objects are visible in the DMZ
    • No leak of other objects (trust cannot be queried)
    • SSO or at least same password

    Disadvantages:
    • Requires an extra infrastructure component
    • SQL license (unless an existing one can be used)
    • May not have full SSO to all services (re-enter password)

    This seems fine, you say. I know how to do the other stuff; how complex is it to implement the ILM solution? Well, with ILM 2007 you have to create at least some code or get the code from someone who has made it (like Inceptio). Besides this, the rest is standard components. A very rough plan looks like this -

    • Install ILM
    • Install and configure PCNS on the internal AD domain controllers to capture password changes
    • Enable password management in ILM
    • Create a DMZ AD management agent, specifying it as the password target
    • Create an internal AD management agent. Specify it as password source and the DMZ AD Management Agent as target
    • Create a management agent that can figure out what objects should be replicated to DMZ (use a group membership, naming convention or some other property). Let this populate an expectedDN property. If the logic is simple, it could be done in an MVExtension. In the solution I have made, I have done it using attribute-value-property files and PowerShell code.
    • Flow the properties you want to keep in sync from the internal AD and export them to the DMZ AD

    With Forefront Identity Manager (FIM) 2010, you should be able to get rid of the coding part, making the solution more attractive. As for IIFP, it is no longer supported. Microsoft removed the download at some point but made it available again. According to my sources at Microsoft, no IIFP version of FIM will be released, so you will have to buy it.

    Monday, October 26, 2009

    Executing PowerShell code from within your own program

    In V2 this got a lot easier. Here’s my PSExecute function. The idea is that you send in a text value and a script. The script references the value as $_. All output from the script is returned as the result of PSExecute. Exceptions (e.g. syntax errors), errors etc. are returned as well. Depending on the usage, this may not be what you need, but you can change it yourself.

    I’m planning to use this in combination with ILM. Having the ability to use a piece of PowerShell script in an advanced flow rule or in the MVExtension seems very useful.

    A script could be $_.toupper() etc.

    private String PSExecute(String Script, String InputValue)
    {
        // Create PS outside the function if called multiple times, for performance reasons
        PowerShell PS = PowerShell.Create();

        // The script could probably be parsed once to speed things up if the same script is used repeatedly
        PS.AddScript("'" + InputValue + "' | foreach {" + Script + "}");
        String s = "";
        try
        {
            foreach (PSObject result in PS.Invoke())
            {
                s += result.ToString();
            }
        }
        catch (Exception e)
        {
            s += "EXCEPTION: " + e.Message;
            // just continue
        }
        foreach (DebugRecord result in PS.Streams.Debug)
        {
            s += "DEBUG: " + result.ToString();
        }
        foreach (VerboseRecord result in PS.Streams.Verbose)
        {
            s += "VERBOSE: " + result.ToString();
        }
        foreach (WarningRecord result in PS.Streams.Warning)
        {
            s += "WARNING: " + result.ToString();
        }
        foreach (ErrorRecord result in PS.Streams.Error)
        {
            s += "ERROR: " + result.ToString();
        }

        return s;
    }





    If you want to try it from PowerShell first, here are the statements -




    $script='$input | foreach {$_.toupper()}'
    $script='$_.toupper()'
    $ps=[System.Management.Automation.powershell]::create()
    $ps.AddScript('''' + $Inputvalue + ''' | foreach {' + $script + '}')
    $ps.invoke()
    $ps.Streams





    Have fun!

    Getting PowerShell V2 SDK

    I had to mess around to find it, so I’ll make it easier for you!

    Unlike V1, the SDK is not a separate download, but part of Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 SP1. After installing it (you can exclude the Win32 parts), add references to the relevant DLLs found in C:\Program Files\Reference Assemblies\Microsoft\WindowsPowerShell\v1.0.

     

    Have fun

    Tuesday, October 13, 2009

    Exchange Web Services Managed API Release Candidate – Part 2

    Developing everyday Line Of Business Applications (LOBA) often involves handling various types of information. When dealing with a custom CRM system, you often need to collect and bind typed data sources together. One of these sources is often email, but one of its best features, i.e. handling unstructured information, is also its worst. Fortunately, Exchange Server 2007 SP1 and the EWS API provide us with the tools to structure and categorize all types of information in an Exchange mailbox store. When connecting two systems, some patterns always emerge. The first is that you need a unique item key from the source system to store in the destination system to make a one-way connection. This strategy has a flaw: when the source item is deleted, you need to remove the reference key in the destination. In practice, people tend to delete items through Outlook rather than your LOBA, and Outlook does not contact your LOBA to tell it to delete the reference. The flaw, in other words, is the tight coupling of information between the source and the destination.

    So let me suggest another solution. Let’s say you want to connect your LOBA with an item from an Exchange mailbox. Instead of storing the Exchange item ID in your LOBA, tag the Exchange item with the item ID of the LOBA item you want to connect it with. Then, from the LOBA, search the mailbox for items tagged with the LOBA item ID. This way, when an item has been deleted you do not end up with orphaned keys, because the search will not return deleted items. To make this work, we need to handle two things: item extended properties and the ability to search for them.

    Working with extended properties

    To illustrate how to work with extended properties I will create an email object and store it in the default Inbox.

    // Set up the request. es is an instance of ExchangeService
    EmailMessage em = new EmailMessage(es);
    em.Subject = "Hello World!";
    em.Body = "Your base belongs to us!";

    // Create the extended property CustomerID as a string
    ExtendedPropertyDefinition epd = new ExtendedPropertyDefinition(DefaultExtendedPropertySet.Common, "CustomerID", MapiPropertyType.String);

    // Give the property a value and assign it to the email
    em.SetExtendedProperty(epd, "1234");

    // Store the message in the Inbox
    em.Save(WellKnownFolderName.Inbox);

    Searching for extended properties





    // Define a view/window into the search result
    ItemView view = new ItemView(50);

    // We are only interested in the id of each item
    view.PropertySet = new PropertySet(BasePropertySet.IdOnly);

    // Create the extended property definition to search for
    ExtendedPropertyDefinition epdCustomerID = new ExtendedPropertyDefinition(DefaultExtendedPropertySet.Common, "CustomerID", MapiPropertyType.String);

    // Variable for storing the search result
    FindItemsResults<Item> foundItems;

    do
    {
        // Search the Inbox for items with the extended property value set to 1234
        foundItems = es.FindItems(WellKnownFolderName.Inbox, new SearchFilter.IsEqualTo(epdCustomerID, "1234"), view);
        foreach (Item current in foundItems)
        {
            // Get the email message from the Exchange service via the item id
            EmailMessage email = EmailMessage.Bind(es, current.Id);
            Console.WriteLine("Message found. Subject: {0} - Body: {1}", email.Subject, email.Body);
        }
        // Offset the view by 50, which gives the ability to page through larger result sets
        view.Offset += 50;
        // ...while more results are available
    } while (foundItems.MoreAvailable);




    Other cool things



    The item id can still be very valuable for your LOBA. If you store the id in your LOBA you can always find the item again, even if a user decides to move the message to another folder: using the EmailMessage.Bind method with the item id will always return the message. The same goes for folders, which also have an id, so even if users decide to reorganize their folders you can still find a folder again by its folder id.
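    As a small sketch of that idea (it assumes es is an authenticated ExchangeService connected to a live Exchange server, and that storedItemId and storedFolderId are ids your LOBA saved earlier; those variable names are hypothetical):

```csharp
// Rebind to a folder by its stored id, no matter where the user has moved it
Folder folder = Folder.Bind(es, storedFolderId);
Console.WriteLine("Folder found: {0}", folder.DisplayName);

// The same pattern works for messages via their item id
EmailMessage msg = EmailMessage.Bind(es, storedItemId);
Console.WriteLine("Message found: {0}", msg.Subject);
```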

    Tuesday, October 06, 2009

    Exchange Web Services Managed API Release Candidate – Part 1

    For some time now developers have been able to access Exchange Server 2007 via Exchange Web Services (EWS). To get this working you basically added a web reference in Visual Studio and constructed a request depending on the type of function you wanted to execute. EWS was a great step away from the various messaging APIs (CDO, MAPI, WebDAV), each with its own strengths and weaknesses. But EWS can still be laborious, and you often end up building your own stub on top of the EWS web reference. Through my teaching of Developing for Microsoft Unified Communications R2 I got to know a new API called the Exchange Web Services Managed API. In short, this is the way to go: this is the API for programming against Exchange Server 2007 with SP1. In the next few posts I will run through some of the basics of this new API. To start, sending an email is a good place to begin.

    Sending email with EWS (C#)

    // Set up the request
    MessageType emailMessage = new MessageType();

    // Set the To email address
    emailMessage.ToRecipients = new EmailAddressType[1];
    emailMessage.ToRecipients[0] = new EmailAddressType();
    emailMessage.ToRecipients[0].EmailAddress = "email@receiver.somewhere";

    // Set the From email address
    emailMessage.From = new SingleRecipientType();
    emailMessage.From.Item = new EmailAddressType();
    emailMessage.From.Item.EmailAddress = "email@sender.somewhere";

    // Set the Subject
    emailMessage.Subject = this.textBoxSubject.Text;

    // Set the Body
    emailMessage.Body = new BodyType();
    emailMessage.Body.BodyType1 = BodyTypeType.Text;
    emailMessage.Body.Value = "Something to write about";

    // Create the CreateItemType in order to create and send the email
    CreateItemType request = new CreateItemType();
    request.Items = new NonEmptyArrayOfAllItemsType();
    request.Items.Items = new ItemType[1];
    request.Items.Items[0] = emailMessage;
    request.MessageDisposition = MessageDispositionType.SendAndSaveCopy;
    request.MessageDispositionSpecified = true;

    // Finally, call the CreateItem method to create and send the email
    CreateItemResponseType response = _service.CreateItem(request);

    // Analyze the response message for success/failure
    ItemInfoResponseMessageType responseMessage = response.ResponseMessages.Items[0] as ItemInfoResponseMessageType;

    bool success = false;
    if (responseMessage.ResponseClass == ResponseClassType.Success)
        success = true;

    MessageBox.Show("Email send succeeded: " + success);

    Sending email with EWS Managed API (C#)





    // Create a service binding for an Exchange Server 2007 SP1
    ExchangeService es = new ExchangeService(ExchangeVersion.Exchange2007_SP1);

    // Assign credentials for accessing the sender account
    es.Credentials = new System.Net.NetworkCredential("sender@email.somewhere", "SuperSecret#1");

    // Use Autodiscover to find the right Exchange front end
    es.AutodiscoverUrl("pli@inceptio.dk");

    // Create an email message and send it, saving a copy
    EmailMessage em = new EmailMessage(es);
    em.Subject = "Hello World!";
    em.Body = "Using EWS Managed API";
    em.ToRecipients.Add("receiver@email.somewhere");
    em.SendAndSaveCopy();