Thursday 29 March 2012

Implementing iSCSI Storage with Windows Failover Clusters



Introduction

Historically speaking, computer clusters conjure up notions of tightly coupled servers attached to high speed SANs in a data center type environment.  These clusters have existed for many years primarily in the enterprise class space due to their high initial investment costs.  In addition, the technology associated with these SAN-based clusters requires a close proximity of the hardware due to fibre cable length restrictions.  With the high initial cost and the close proximity requirement, clusters have typically been confined to corporate data centers.



This has all changed with the advent of iSCSI technology!  As you may know, iSCSI stands for Internet Small Computer System Interface.  It is a transport protocol for mapping block-oriented storage data over TCP/IP networks.  This allows clusters to take advantage of existing networks to access shared storage instead of using expensive SAN environments. In addition, by using the network, servers and storage can be located anywhere on the LAN instead of being restricted to the close proximity of a data center.  Small and medium sized businesses can now leverage iSCSI technology to implement clusters.
This article discusses how Microsoft has embraced iSCSI technology and now supports Windows Failover Clusters using it to access shared storage.  The article explores the different iSCSI components and how they are implemented with clusters.  Screen shots are provided to illustrate how to setup iSCSI-based disks and configure them for use with Windows Failover Clusters.

iSCSI Failover Clusters

In order to implement iSCSI technology with Windows Failover Clusters, you must first be familiar with the iSCSI terminology.  The term “target” is used to refer to a network storage device similar to a LUN on a SAN.  The term “initiator” is used on the client-side when a network connection is established to a target.  So by creating these targets on an iSCSI-based storage array, you can configure the client to initiate persistent network connections to the disks each time the server boots.
Setting up the target LUNs is the first step.  How to accomplish this depends on the storage vendor’s configuration program.  No matter who the vendor is, a method will be provided to create a target storage device and establish various properties.  For failover clusters, the target must be configured to allow multiple connections to the same device.  In the following example, StarWind software is used to create several iSCSI targets that currently have connections to 2 different servers.
iSCSI technology: setting up target LUNs
Once the targets are created, each of the cluster servers must initiate a connection to them.  This is accomplished with Microsoft’s iSCSI Initiator software.  The software is included with Windows 2008 or can be installed separately with earlier versions of the operating system.  A simple configuration program is used to specify which target portals to listen to (iSCSI storage arrays), and which iSCSI targets to connect to (LUNs).  In the following example, you can see the Microsoft iSCSI Initiator used to detect a new iSCSI target called iscsi-new with an inactive status.
iSCSI technology: Microsoft iSCSI Initiator
Using the “Log on…” button, you can configure how the initiator attempts to connect to the target.  You can specify which local adapter to use, the source IP address, and the target portal.  There are also several security related settings including IPsec.  Once you connect to the target, the storage device can be accessed just as you would with a locally connected disk drive.  The following example illustrates the various properties that can be configured when connecting to an iSCSI target.
Microsoft iSCSI Initiator: connecting to an iSCSI target
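The same portal-discovery and logon steps can also be scripted with the iscsicli.exe utility that ships with the Microsoft iSCSI Initiator.  Here is a minimal sketch run from a PowerShell (or cmd) prompt; the portal address and target IQN are placeholder values you would replace with your own:

# Placeholder values - substitute your storage array's portal IP and the target's IQN.
$Portal = "192.168.1.50"
$TargetIqn = "iqn.2008-08.com.example:iscsi-new"

# Register the target portal (the iSCSI storage array) with the initiator service.
iscsicli QAddTargetPortal $Portal

# List the targets the initiator can now see.
iscsicli ListTargets

# Log on to the target; once connected, the disk shows up in Disk Management.
iscsicli QLoginTarget $TargetIqn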





The next step is to use the Disk Management utility to bring the new iSCSI disk online, and initialize the disk by writing a signature on it.  The drive then needs to be formatted with the NTFS file system and optionally assigned a drive letter.  In the following example, you can see the new iSCSI disk has been initialized and formatted.
Microsoft iSCSI Initiator: disk management utility
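If you prefer to script the disk preparation, the same online/initialize/format steps can be driven with diskpart.  This is only a rough sketch; the disk number, volume label, and drive letter are assumptions you would adjust for your environment:

# Build a diskpart script and run it (disk 2, label and letter are placeholders).
$Commands = @"
select disk 2
online disk
attributes disk clear readonly
convert mbr
create partition primary
format fs=ntfs quick label=ClusterData
assign letter=S
"@
$Commands | Out-File -Encoding ASCII C:\prep-iscsi-disk.txt
diskpart /s C:\prep-iscsi-disk.txt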
Finally, use the Failover Cluster Management snap-in to add the new storage so the cluster can use it.  This is accomplished by highlighting the Storage container in the left pane, and right-clicking to specify “Add a disk”.  You will be prompted which disk to add as illustrated in the following diagram.  The newly added storage can then be used by applications to access shared iSCSI disks.
Failover Cluster Management snap-in
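On Windows Server 2008 R2 the same "Add a disk" step can also be scripted with the FailoverClusters PowerShell module (on 2008 RTM you would use cluster.exe instead).  A hedged sketch:

# Load the failover clustering cmdlets (2008 R2 and later).
Import-Module FailoverClusters

# Show disks that are visible to every node but not yet part of the cluster.
Get-ClusterAvailableDisk

# Add the available disks to cluster storage.
Get-ClusterAvailableDisk | Add-ClusterDisk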

Summary

Windows failover clusters can now take advantage of iSCSI storage to access shared data.  Using iSCSI reduces the initial investment costs for clusters by allowing you to use your existing TCP/IP networks instead of expensive SANs.  It also allows servers and storage to be located anywhere on the LAN instead of being restricted to the confines of a data center.  The lower cost of ownership and flexible implementations make iSCSI clusters an attractive alternative for small and medium sized businesses.

How to Install SQL Server 2008



SQL Server 2008 is relatively easy to install, but it does take a little knowledge of the process and a little planning.  For most shops the planning phase can be minimal but there will be instances (like clustering) when you’ll need to plan quite a bit.  This will not be one of those cases.  Today we’re going to discuss a straight-forward SQL Server 2008 install.  Unlike some of the other Microsoft products there are a lot of screens to go through when installing SQL Server 2008, but most of them aren’t that bad.  You just need to know what choices to make.

First of all you need to make sure your system meets the minimum requirements for the version of SQL Server 2008 you’re installing.
Assuming that you'll be installing this on a production system, it's important to know that at least one system reboot may be required for this install.  That's because the first thing the install will do is upgrade your version of Windows Installer to 4.5.  If you've already got Windows Installer 4.5, setup will not require the reboot.  Next it will install version 3.5 of the .NET Framework.  On most boxes this shouldn't require a reboot.  I tell you this because if you're installing SQL Server 2008 on a current production box, you'll need to plan ahead for the reboot and perhaps do it a few days before you install SQL Server 2008 so you've got it out of the way when the time comes.
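If PowerShell happens to be handy on the box, here's a quick, hedged way to check those two prerequisites before you schedule anything (the registry path below is the standard .NET setup key; adjust as needed):

# Windows Installer version - 4.5 ships as msi.dll version 4.5.x.
(Get-Item "$env:SystemRoot\System32\msi.dll").VersionInfo.ProductVersion

# .NET Framework 3.5 registers itself under this key when installed (Install = 1).
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5" |
    Select-Object Version, Install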



Ok, now that we’ve taken care of the preliminaries, let’s install SQL Server 2008.
  1. The media will autorun and present you with this screen.

Notice how much info Microsoft gives you right away.  Over on the left there are menu choices and the options on the right change as you go through the menus.  You want to click on the Installation menu on the left to be presented with this screen.

2.  You can see there are many options here, but in this case the only one that makes sense is to click the top option for a stand-alone install.  Click that option.
3.  Once you choose the stand-alone install, setup will install the setup support files.  It does this every time you run the installer so don’t be surprised if you run it again and again and it installs them every time.  At this point setup will also run some checks to make sure that your system is ready to run through the install process.  If it’s not and something fails, you’ll see an error similar to the one below.  In my case I have a reboot pending and setup will not continue until I reboot.

If you don’t have any problems with the checks, the screen will be all green like this:

I might also add that if you have to reboot, setup should continue on its own once your server comes back up.  It will re-run this check, and if all errors have been eliminated, it will present you with the above screen just as if there had never been a problem.
4.  Click OK.  Next you will be presented with the setup support rules page.  This is another set of checks that tells you whether things will complete and work properly once setup completes.  The screen looks like this.

And again, warnings can be ok, but if there are any errors you’ll need to resolve them before moving forward.  I’ll talk more about the kinds of things that can go wrong with setup in another article.
5.   Now it’s time to choose the installation type.  Are you installing a new instance of SQL Server 2008, or adding to an existing instance?

6.   Now enter your product key if you need to.  There are times when it'll be filled in automatically, for instance if you're installing an MSDN version.

7.   Next you need to agree to the license terms.  For the sake of completion I’m going to go ahead and say that you should read all of the license terms so you know what you’re agreeing to.  However, assuming you agree and actually want to install SQL Server 2008, then check the box and click next.

8.  This is one of those really important parts of the installation:  choosing features to install.  Unfortunately, since I don’t know your goal I can’t really offer very much guidance here, but I can tell you that you’ll need to install Database Engine Services at a minimum if you want to run a SQL Server 2008 database on the server.  Everything else is optional, but it is a good idea to install the documentation and the management tools.  If you’re not sure whether you should install a feature or not there are two bits of advice I can offer.  First, as you click on each option, a description comes up in the right pane that tells you what it’s all about so that can help you decide.  And second, installing everything won’t hurt your server so if you’re not sure, then it’s cool to go with a full install.  It’s still always best to snipe the features you’re after though.

You’ll notice that some features are grayed out.  This is because I’ve already got SQL Server 2008 installed on my box, and some features can only be installed once across all instances.  It doesn’t do any good to install multiple copies of the documentation, for example.



9.   This next screen allows you to define whether you want to install a default or a named instance.  You can of course only install one default instance, and all others must be named.  As you can see here, I’m installing a named instance because I’ve already got a default instance on my box.

10.   This screen outlines the disk space requirements for all the options you’ve chosen.  There’s nothing really to do here and I’ve never seen it fail since nobody puts a database on a server that can’t even hold the install.

11.   Now you need to configure your service accounts.  You can select the same account for all services or separate them into different accounts.  And while detailed advice on that is beyond the scope of this article, I can say that either way is usually ok.  The one piece of advice I can give you though is to make it a domain account instead of just a local Windows account.  And there are special permissions that these accounts need so you should make sure they have those rights.

12.  This screen takes a little explanation.  It's asking you which security model you want to run.  It defaults to Windows authentication, which should be fine for most shops.  The deciding factor is whether you've got non-Windows domains, or if you've got regular Windows domains that don't trust each other.  This could be a situation where you've got external customers hitting your database and you don't want to give them Windows access, but rather access through some sort of portal.  All the same, it's best to use Windows authentication if your users have Windows accounts.  It's more secure and it's easier for the users to connect to the database.  This screen also wants you to define default data directories.  You can change these at any time once SQL Server 2008 is installed, so you can just accept the defaults here if you like and then change them later if you need to.

13.   This screen gets overlooked by a lot of people.  It’s really a good idea to check both of these boxes.  There’s absolutely no personal or identifying information sent to Microsoft and checking these boxes allows them to get automatic error and usage information so they can improve the product.  And contrary to popular belief, they actually do look at every error report that comes across.  So check both of these.  It costs you nothing and it helps improve the product.

14.  You’re going to learn to hate this screen.  Here we have another set of checks that can stop your install and in my experience, most of the stuff that’s going to stop you will happen here… after you’ve already been through most of the screens.  This topic is too detailed to go into right now, but just know that you can expect some showstoppers here if there are going to be any.  The good news is that you mostly get stopped on upgrade so if you’re installing a fresh instance you’ll probably be ok.  So once you get all greens you can go ahead.




15.   Assuming you've made it this far you should be free to hit the Install button.  There's nothing really notable about this screen; it's just a summary of all the options you've chosen.  If you like you can look it over to make sure that you didn't do anything stupid, but I'm usually ready to just start installing at this point so I just bull on ahead.

OK, you’ve just gone through the setup wizard to install SQL Server 2008.  As you can see it’s fairly straight-forward but there are a lot of screens to click through.  This is a much bigger topic than I could ever cover in a single article, but starting with a simple wizard-driven install is a good place to get your feet wet.
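As an aside, once you know which choices you'd make in the wizard, the same install can be driven from the command line.  The sketch below is only an outline with placeholder instance, feature, and account values, and the exact switch list varies a little between editions and service packs, so check setup.exe /? or Books Online before relying on it:

# Run from the SQL Server 2008 media folder; all values below are placeholders.
.\setup.exe /QS /ACTION=Install /FEATURES=SQLENGINE,SSMS /INSTANCENAME=SQL2008A /SQLSVCACCOUNT="MYDOMAIN\svc_sql" /SQLSVCPASSWORD="PlaceholderP@ss1" /SQLSYSADMINACCOUNTS="MYDOMAIN\DBA Team"
# Add /SECURITYMODE=SQL and /SAPWD=... only if you genuinely need mixed mode authentication.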

Monitoring Your Database with SQL Profiler




SQL Server Profiler is a flexible profiling tool that comes with SQL Server. Profiler allows you to monitor and capture a record of the activity going on in your database, including ad hoc queries, stored procedure requests, logins, errors, and much more.  Today we will walk through an example of one way to use Profiler as we introduce the tool.
Let’s say that we have an application named SOCK that runs against our database SOCKsql, on the SwampTest server.  We’d like to see what T-SQL queries hit the database when a user logs into SOCK. We’ll need to:
  1. Start SQL Server Profiler and set up the trace
  2. Run the trace while we perform the SOCK application login
  3. Stop the trace and examine the captured data

Start SQL Server Profiler

You can start SQL Profiler from the Start menu, or from within SQL Server Management Studio (SSMS) under the Tools menu. (You can also use Start > Run > Profiler.)  On startup, Profiler will present you with a Connect to Server dialog box; enter the name of the server you wish to trace - and your authentication details - just as you would for SSMS, and click Connect.
SQL Server Profiler: Trace Properties Window
The Trace Properties window allows you to tailor the trace to your needs. On the General tab, you can enter a Trace name, choose a trace template, select save options, and enable a trace stop time. Today we’re only running a short trace that we don’t intend to save, so we’ll keep the default options.
SQL Server Profiler: Trace Properties, Events Selection
On the Event Selection tab we choose what database events, and which attributes from each event we would like to capture. The events that are already selected are part of the “Standard” template that we saw on the General tab. (Note that checking and unchecking boxes on this screen will not affect the template.)
Near the bottom right of the screen, there are two checkboxes. “Show all events” displays all the possible events we can trace using Profiler. Take a quick look at all the categories available, then uncheck the box.
Check the box marked “Show all columns” so we can see all the available attributes for each event.
  • We don’t really need the Audit Login and Audit Logout events today, so uncheck them.
  • You will need the “ExistingConnection” event 99% of the time; without it, any action performed by an existing connection will not show up in your trace.
  • RPC: Completed stands for “Remote Procedure Call: Completed”. The SOCK application most likely makes use of RPCs, so we will leave that checked.  Additionally, scroll right and select “Database” for the RPC:Completed event. We’ll see more about this later.
  • SQL:BatchStarting and SQL:BatchCompleted denote the beginning and end of a group of T-SQL statements running against the server. These can come in handy for delving into specifics, but again, we’re really only interested in the procedure calls today. Uncheck these.
Let's also use column filters to filter out trace data that doesn't apply to our needs. Click Column Filters, and in the Edit Filter dialog select DatabaseName, then click on "Like" and enter the name of our database: SOCKsql. This ensures that Profiler will only capture events that are happening in the SOCKsql database.
SQL Server Profiler: Edit Filter

Run the Trace

We were very selective with the events we chose on the last screen, and we'd also like to limit the time that the trace runs, so that we're not overwhelmed with information we don't need and the burden on the server is lighter than it would be with a heavy trace.
When we're ready to log into SOCK, we click the Run button in SQL Profiler. We can see "Trace Start" at the top of the event list in Profiler, followed by a series of "ExistingConnection" rows.
We only want to capture the events around the SOCK login, so as soon as the trace begins, we log in to the SOCK application, and then click the red “stop trace” button in SQL Profiler. We now have a table of traced events.
SQL Server Profiler: Trace Results

Read the Trace

If we wished, we could save these events into a file using File > Save As > Trace File, or as a SQL table (File > Save As > Trace Table).  We particularly like saving trace data to a SQL table, as it allows us to search for specific text quickly (using SELECT * FROM tableName WHERE textdata LIKE ‘%text%’), or get the top 10 most expensive queries in terms of reads (using SELECT top 10 * FROM tableName ORDER BY Reads DESC). But again, today, we only need to see what stored procedures are being called.
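For example, if the trace had been saved to a table (say dbo.SOCKTrace in a DBA utility database; both names are made up here), those two lookups could be run from a SQL Server 2008 PowerShell session (sqlps) with Invoke-Sqlcmd:

# Every captured statement containing a particular bit of text (the LIKE value is a placeholder).
Invoke-Sqlcmd -ServerInstance "SwampTest" -Database "DBAUtil" -Query "SELECT * FROM dbo.SOCKTrace WHERE TextData LIKE '%SOCK_Login%';"

# The ten most expensive captured statements by logical reads.
Invoke-Sqlcmd -ServerInstance "SwampTest" -Database "DBAUtil" -Query "SELECT TOP 10 * FROM dbo.SOCKTrace ORDER BY Reads DESC;"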




You can now scroll through the traced events, or search for a keyword or number in any column.  As you click on each row, the TextData for that row displays in the viewing pane at the bottom of Profiler.  And remember that we filtered the data by database name; you can verify that for yourself by scrolling right to see the Database Name column.

How to Create and Edit a Report in SQL Server Reporting Services



SQL Server 2005, 2008 and 2008 R2 all come packaged with SQL Server Reporting Services (SSRS) - an enterprise-level reporting solution. With SSRS, you can create, schedule, publish, and manage a multitude of reports using a variety of data sources.  In the following sections, we will create, edit, and export an SSRS report manually.

Add an SSRS Report to a Project in BIDS

To create a report manually in Business Intelligence Development Studio (BIDS):
  1. Open BIDS.  Select File > Open > Project/Solution.
  2. Locate and open the previously created Report Server Project.
  3. In the Solution Explorer, right-click the Reports folder and select Add > New Item.
  4. Select "Report" from the Add New Item window, and name the report (e.g., "MyReport.rdl"). Click OK.


You now have an empty SSRS report open in BIDS.
Business Intelligence Development Studio: Creating SSRS Report
On one side of the screen, you should see tabs for Toolbox and Report Data. If either of these tabs is not visible, select the appropriate tab from the View menu.  The Toolbox tab displays the items available to your report, such as textboxes and tables.  The Report Data tab displays the available data elements, which can include result sets from your data sources, images, and built-in report fields (like "Page Number").

Add a Data Source and Dataset to the Project

A data source identifies the database (or other data object, such as an XML file) from which you wish to pull data, and the connection properties used to connect to it - such as server name and login credentials. This example uses the previously created SQL Server data source.  To add a data source to your report:
  1. On the Report Data tab, select New > Data Source. Give the data source a meaningful name.
  2. Select the "Use shared data source reference" radio button.
  3. Select the previously created SQL Server data source from the drop-down menu. Click OK.
SSRS Report: Add Data Source
A dataset is the specific set of data requested - via a query or stored procedure execution - from the data source.  The dataset defines what data is available to display in the report.  To add a dataset to your report:
  1. On the Report Data tab, select New > Dataset. Give the dataset a meaningful name.
  2. Select the data source you added in the steps above.
  3.  This screen allows you to type or paste your query or stored procedure call, or you can click "Query Designer" to build your query in the graphical designer.  Paste the following query in the Query text area:
  4. SELECT P.Name,
     ProductNumber,
     Color,
     ListPrice,
     SC.Name [Category]
     FROM Production.Product P
     LEFT OUTER JOIN Production.ProductSubCategory SC
     ON P.ProductSubCategoryID = SC.ProductSubCategoryID
     WHERE ListPrice BETWEEN 0 AND 1000
     ORDER BY Category, ListPrice ASC
  5. Click OK.
You can now see the data set and available data fields in the Report Data tab.

Add Data Elements to the Report

Now that you have defined your dataset, you can begin to build the report itself, starting with a table of products, title, and page number:
  1. Drag a Table item from the Toolbox tab onto the surface of your report.
  2. From the Report Data tab, drag and drop the data fields Category, Color, ProductNumber, Name, and ListPrice into columns in the empty table.
SSRS Report: Add Data Elements
  3. Drag a Textbox item from the Toolbox tab onto the surface of your report, above the table. Type "AdventureWorks Products" in the textbox.
  4. On a blank area of the report, right-click and select Insert > Page Footer.
  5. Drag a textbox item into the footer. From the "Built-In Fields" section on the Report Data tab, drag "Page Number" into the textbox.
  6. Click the Preview tab above the report area to see the report populated with data.

Format the Report and Export

The Design tab allows many options to improve the report presentation, including:
  • Add and delete columns - Right-click the table and select Insert Column > Left (or Right).
  • Change the column widths - Drag and drop column header borders.
  • Change text font and style - Select the fields and apply changes from the format toolbar.
  • Format numeric fields - Right-click the numeric field, select "Text Box Properties", and make changes using the Number option (for example, List Price might be displayed as Currency).
  • Change the report area - Drag the edges of the report to make it larger or smaller, or drag the border of the footer to move it closer to the end of the table.
The size of the report area may impact the way the report is displayed when you export it. For example, if you export to PDF at standard size (8.5 x 11") portrait layout and your report is too wide, the report will contain extra pages to accommodate the extra width.  Use the ruler (right-click and select View > Ruler) to monitor the report width, and modify the report page layout and borders as needed in Report > Report Properties.




To export the report to PDF, Excel, or other file format:
  1. Click the Preview tab.
  2. Click the "Export" icon above the report.
SSRS Report: Formatting and Exporting
  3. Select your preferred file format.
  4. Select the location and name for your file. Click Save.
You have created, formatted, and exported an SSRS report.

Scripting Database Objects in PowerShell




One of the most important tasks a DBA can perform is to make sure he can recover from unexpected disasters.  And while backup and restore is usually the method chosen for recovery, scripting the database schema can be a very powerful tool.  Quite often a user will drop a stored procedure, or change a view by mistake and you don’t want to restore the entire database just to get back a single piece of code. 

The problem is that scripting database objects for a single server is a manual process for the most part, and keeping up with schema changes for a single server can be hard enough, but for an entire enterprise it's nearly impossible. There are some third party tools that can help, but most of them don't have command lines that are rich enough to handle such a process.  This is where PowerShell comes in very nicely.  With PowerShell you can easily script all the objects in your database manually, or schedule the scripting to run anytime you'd like.  And you can do it for as many of your servers as you'd like.
Let’s start by scripting all the tables in a database.  For our purposes we’re going to connect to a database and navigate to the ‘tables’ node.
>cd SQLSERVER:\SQL\localhost\DEFAULT\Databases\MyDB\Tables
Now that we’re connected to the ‘tables’ node we can script all of the objects.
Dir | %{$_.Script()}



It’s just that easy.  Now I’ll go ahead and explain the code.  First you start by pulling a list of tables with the ‘dir’ command (you can also use ‘gci’).  Then you pipe ‘|’ the results of that command to the foreach loop command.  Here I’m using the alias for the foreach command, which is ‘%’.  Everything that goes inside the foreach (%) loop is surrounded by  the curly brackets {}.  Once inside the foreach construct, you call the script method for each object encountered.  The ‘$_’ is the built-in iteration variable in powershell.  So for each iteration of the foreach loop the ‘$_’ represents the one currently in the loop.  So what you’re saying here is foreach object, call its script method.  The script method is overloaded and can take a parameter but we won’t discuss that until the next article.
Now that those basics are out of the way, let’s see how we’d script out the stored procedures in the same database.
>cd SQLSERVER:\SQL\localhost\DEFAULT\Databases\MyDB\StoredProcedures
>Dir | %{$_.Script()}
Yep, that's right.  The method you use to script the objects doesn't change.  Only the node you're in changes.  And while you could do some fancy tricks to change nodes, the easiest way to script all the objects in your database is to just repeat the above steps for each node you want to script.  It would look something like this in practice:
>cd SQLSERVER:\SQL\localhost\DEFAULT\Databases\MyDB\Tables
>Dir | %{$_.Script()}
>cd SQLSERVER:\SQL\localhost\DEFAULT\Databases\MyDB\StoredProcedures
>Dir | %{$_.Script()}
>cd SQLSERVER:\SQL\localhost\DEFAULT\Databases\MyDB\Views
>Dir | %{$_.Script()}
Of course, all this does is print the output to the screen and that’s only going to be minimally useful at best.  What you really want to do is save the scripts to a file so you can actually restore them if you need to.  So we’re going to modify the script just a little so we can save it to a file.
>dir | %{$_.Script() | out-file c:\Tables.txt -append}
What this is saying is that once you have the object scripted, pipe the output to a file located at c:\Tables.txt and append the output to the end of the file.  If you don’t use the ‘-append’ flag the file will be overwritten for each object and you’ll only get the last object in the file.
Now, let's suppose that you wanted to script only those objects in a certain schema.  For our purposes let's say that you only want objects in the HR schema.  All you have to do is pipe the output of the 'Dir' command to a where-object to limit the results before scripting them.  It would look like this:
>dir | where-object{$_.Schema -eq "HR"} | %{$_.Script() | out-file c:\HRTables.txt -append}




Turning this into a process that runs on multiple boxes is relatively simple.  There are a number of ways you could do this, but for our purposes I’ll choose the easiest to follow.  We’re going to be putting our list of servers into a text file and cycling through them one at a time.  So create a new text file and put your servers inside.  Just put one on each line like so:
Server1
Server2
Server3
Now all you have to do is create a foreach loop that will cycle through each server in the file.  Here's the entire line of code.
>get-content c:\Servers.txt | %{$Server = $_; cd SQLSERVER:\SQL\$Server\DEFAULT\Databases\MyDB\Tables; dir | %{$_.Script() | out-file "c:\${Server}Tables.txt" -append}}
It's just that easy.  The only changes I made to the original code were to capture the current server name coming from the Servers.txt file in a '$Server' variable (inside the inner loop, '$_' refers to the table being scripted, so it can't stand in for the server there), use $Server in the path instead of the hardcoded server name, and enclose the out-file filename in double quotes so I can prefix the filename with the server name.  The curly braces in '${Server}' tell PowerShell where the variable name ends so it doesn't read the whole thing as a variable called '$ServerTables'.
Unfortunately, as it stands, the code will fail if you just paste it into SSMS to run.  I'll explain why and solve the problem in the next article:  Scripting database objects with advanced scripting options in Powershell.

Scripting Database Objects with Advanced Scripting Options in PowerShell





In the last article, Scripting Database Objects in PowerShell, I showed you the basics of scripting your database objects so you could schedule it to run anytime you'd like, and run it against multiple servers.  In this article, I’m going to show you how you can set different scripting options by using SMO to pass a parameter to the overload of the script method.

Typically when you script database objects in SSMS you use the scripting wizard, which provides you with different options for your scripts.  Here’s a screenshot of the options window from the SQL Server 2008 R2 scripting wizard:
SQL Server 2008 R2 Scripting Wizard
Scripting objects in PowerShell would be pretty useless if you couldn't get the same detailed level of control over your scripts as you can through the wizard though.  And this is where SMO is going to come into play.  In PowerShell, we're going to start by loading the SMO assembly like this:
>[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

If you loaded the SMO assembly correctly, you’ll get a confirmation that looks like this:
SQL Server SMO
Next, you need to create a new variable and give it the ScriptingOptions type.  To do this you need to create a new object of that type like this:
>$so = new-object Microsoft.SqlServer.Management.Smo.ScriptingOptions
Now that you’ve got the assembly loaded and a variable created as a ScriptingOptions type, you can begin to use your new variable and set its properties.  We’ll start by simply inspecting the properties.  You can do this by simply typing the name of the variable to print its contents to the screen.
>$so
The results will look like the screenshot below, and you’ll notice the same options from the screenshot of the wizard above.
SQL Server SMO
And of course there are many more options than are in this screenshot.  Anyway though, the options you see above are only the defaults.  You can change them at will, and that’s what we’re going to do.  For our purposes we’re going to change 3 of the properties like this:
>$so.IncludeDatabaseContext = 1
>$so.ScriptDrops = 1
>$so.NoFileGroup = 1
Those options are pretty easy to understand, so I won't go into detail here about them.  And now that we’ve got the options set that we want, we can use them in our scripting command from the last article like this:
>dir | %{$_.Script($so) | out-file c:\Tables.txt -append}
Notice that all I did was put the ‘$so’ inside the () for the script method call.  For that portion of it, there’s really nothing more to it than that.  The process isn’t perfect though.  SMO itself doesn’t handle the commands perfectly, so some properties don’t seem to have any effect on the outcome of your script.  A good example is the ‘ScriptBatchTerminator’ property.  Setting this property to true is supposed to script a ‘GO’ statement between each of the objects, but it doesn’t.  In fact, it doesn’t do anything that I can see.  So what you end up doing is writing the GO statement yourself.  Remember the script for the StoredProcedures from the last article?  Well it actually won’t run.  See, a create statement has to be the first statement in a batch which means that without the GO statement, everything after the first object will cause an error.  This is an easy problem to solve.  Now, we shouldn’t have to do this, but it is what it is, right?  Here’s how you fix this issue:
>dir | %{$_.Script() | out-file c:\Tables.txt -append; "GO" | out-file c:\Tables.txt -append}
Did you see what I did there?  After the first ‘-append’ I put a semi-colon, which terminated the line.  It allows me to put multiple commands on the same line.  We use this same trick in T-SQL all the time.  Then I hardcoded the text “GO” and piped it to the same Tables.txt file with another ‘-append’ flag.  So all I’m doing is after each script, I’m printing a GO statement on its own line.  This is what SMO was supposed to do for us, but didn’t.  So you have to be careful sometimes and test that the SMO provider is doing what you expect it to do.  Most of the time it’ll serve you well though.



Other than scheduling, one of the biggest advantages you get out of scripting your objects in powershell is the ability to control the order in which they’re scripted.  Suppose you have objects that rely not only on objects in the current database, but perhaps they rely on objects in another database as well.  You can script those objects in the order they’re needed so none of your scripts fail.  Until now we’ve talked about scripting your objects for recovery, but this is very useful when you’re trying to create a schema-only copy of your database for a dev or test server.  And truth be told, I really don’t like scripting all of my objects into a single script anyway.  If I’m scripting an entire DB to put onto another server I might consider it, but for my normal recovery-level scripting I prefer to put everything into its own file.

Creating an Enterprise Scripting Template in PowerShell




What we've done in the previous articles (Scripting Database Objects in PowerShell and Scripting Database Objects with Advanced Scripting Options in PowerShell) was to set the stage for this final piece that will make everything infinitely more usable.  Don't get me wrong, those other skills come in handy from time to time and are useful to have.  It's also useful to have them as a frame of reference and a way to build up the examples gradually so they're easier to understand.  So what we're going to talk about this time is taking everything we've learned so far and using it to build a scripting process that's easier to sustain in the different areas you're going to need it in.  And I suppose it would be good to start with defining what 'easier to sustain' actually means so we'll know what we're trying to achieve.


In this case, we’re trying to meet some simple criteria:
  1. Make it easier to manually recover a single object.
  2. Make it easier to automate the recovery of a single object.
  3. Make it easier to compare versions of an object from different days.
  4. Make it easier to allow users to recover their own objects.
  5. Make it easier to recover a specified group of objects.

We’ve got this list of 5 basic things that we want to accomplish.  There could easily be some more discrete tasks we might want to perform as well, but I think they’ll all be derivatives of these.  So how are we going to make this happen?  Well, we’re not going to change the process all that much.  All we’re going to do is change it so that each object is scripted to its own file, and possibly even by date.  This is what’s going to allow us to keep a version history by date as well as easily restore a single object or even compare different versions of objects.  If you store everything in a single script, then you’ve got to search through the file to find the object in question and then copy and paste it into an editor to restore it.  And it would be next to impossible to reliably automate the recovery or to allow users to recover their own objects.  So by putting everything in its own file, you ensure that the object is easy to find and is easily runnable because it has no other code to watch out for.
OK, no more talking… let’s code.
We’re going to start off with our original code for reference.
>[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
>$so = new-object Microsoft.SqlServer.Management.Smo.ScriptingOptions
>$so.IncludeDatabaseContext = 1
>$so.ScriptDrops = 1
>$so.NoFileGroup = 1
>dir | %{$_.Script($so) | out-file c:\Tables.txt -append}
**I went ahead and assumed that we wanted to keep our advanced scripting options in there.  Especially since it’s a good idea to have a database context in code like this.
Ok, this is where we’re going to start from.  Let’s start making this look like we want it.  First, let’s put the scripts into a text file named after the table itself.
>dir | %{$TableName = $_.Name; $_.Script($so) | out-file "c:\$TableName.txt"}
I assigned the name of the current object to a variable called $TableName.  I didn't have to do this, but it makes the script easier to read when you see it used later in the line.  If you have a lot of loops inside of loops it can be hard to keep track of which one you're in, so just seeing a "$_" everywhere can get hard to decipher.  You can also see that I enclosed the file name in quotes and used the variable instead of the hardcoded "Tables.txt" that was there before.  Now the file will be created with the name of the current object.  I also took away the '-append'; since we're creating a new file each time, we really don't need it anymore.
We still need to deal with the schema though.  We can do that in a couple of ways.  We can either create a schema folder or make the schema part of the file name itself.  First we need to create a variable to hold the base path for our scripts.  We’re assuming that all of your scripts will be housed in a central location and if they’re not, you’ll want to make it easy to change where they’re stored.  So keeping a base path variable ensures that you only have a single place to change the path instead of having to search the script for it and possibly making a mistake.  So here’s our base path:
$BasePath = "c:\DBscripts"
Now we need to test whether the schema folder we’re going to write our scripts to already exists, and if not, we’ll create it.  This is very useful because it makes your script more dynamic should someone add a new schema without telling you (and they probably wouldn’t tell you).
$Schema = $_.Schema;
IF ((test-path $BasePath\$Schema\Tables) -eq $False) {MKDIR $BasePath\$Schema\Tables}
Here I test whether the path already exists.  If it doesn’t then I create it.  Notice I specifically scripted it for tables.  This is because any schema can have different types of objects and I want the entire schema scripted under its own folder.  So once I open the schema’s folder I can potentially see folders for Tables, StoredProcedures, Functions, Views, etc.  It’s just a good way to organize things, but you can organize yours however you like.
Now I need to add the date element.  I prefer to have the date out front because I’m usually looking for things by date, but again, you can organize yours however you like.  Here’s an example of how my path will look when I’m done:
C:\DBscripts\20101206\dbo\Tables
So here’s how to add the date element:
$ExecutionDate = Get-Date -Uformat "%Y%m%d";
Now I need to add that to my test-path line.  This is going to make sure that every day you run this, it’ll create a new path for that day’s date.
$ExecutionDate = Get-Date -Uformat "%Y%m%d";
$Schema = $_.Schema;
IF ((test-path $BasePath\$ExecutionDate\$Schema\Tables) -eq $False) {MKDIR $BasePath\$ExecutionDate\$Schema\Tables}
With that in place, the only thing really left to do is to put the new dynamic path into the loop where the file gets created.
>$FullFilePath = "$BasePath\$ExecutionDate\$Schema\Tables"
>dir | %{$TableName = $_.Name; $_.Script($so) | out-file "$FullFilePath\$TableName.txt"}



Just to make things easier I created a new variable with the full path.  It just makes the loop easier to read.
Now, since I’ve done a lot of talking in between these lines of code, here’s our final version of the code so you can see it all together.
>[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
>$so = new-object Microsoft.SqlServer.Management.Smo.ScriptingOptions
>$so.IncludeDatabaseContext = 1
>$so.ScriptDrops = 1
>$so.NoFileGroup = 1
>$BasePath = "c:\DBscripts"
>$ExecutionDate = Get-Date -Uformat "%Y%m%d";
>$Schema = $_.Schema;
>IF ((test-path $BasePath\$ExecutionDate\$Schema\Tables) -eq $False) {MKDIR $BasePath\$ExecutionDate\$Schema\Tables}
>$FullFilePath = "$BasePath\$ExecutionDate\$Schema\Tables"
>dir | %{$TableName = $_.Name; $_.Script($so) | out-file "$FullFilePath\$TableName.txt"}
Ok, so I’ve given you a way that you can script your schemas and some good reasons behind the way I did it.  You’re of course free to organize your folders as you see fit, and you can make this whole process work any way you wish.  What I didn’t do was provide you with the complete solution.  There’s nothing wrong with this code, but you’ll need to decide whether you want to do this for a single server, or for multiple servers.  If you do it for multiple servers you may want to add the server name to your base path.  And you may want to add the database name as well.  These are decisions I can’t make for you, but if you follow the formula I’ve given you, you should be able to put these things into the script with little trouble and be up and running in no time.
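To give you a hedged starting point (not the complete solution), here's one way the pieces above could be combined to add the server name and database name to the path; Servers.txt, MyDB, and the SQLSERVER: provider path are assumptions you'd swap for your own:

>[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null
>$so = new-object Microsoft.SqlServer.Management.Smo.ScriptingOptions
>$so.IncludeDatabaseContext = 1
>$BasePath = "c:\DBscripts"
>$ExecutionDate = Get-Date -Uformat "%Y%m%d"
>get-content c:\Servers.txt | %{$Server = $_; dir SQLSERVER:\SQL\$Server\DEFAULT\Databases\MyDB\Tables | %{$Schema = $_.Schema; $TableName = $_.Name; $FullFilePath = "$BasePath\$ExecutionDate\$Server\MyDB\$Schema\Tables"; IF ((test-path $FullFilePath) -eq $False) {MKDIR $FullFilePath | out-null}; $_.Script($so) | out-file "$FullFilePath\$TableName.txt"}}

The inner loop is the same per-table logic as before; the only real change is that the folder path now carries the server and database names so each day's run lands in its own tree.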

PowerShell Basics for DBAs




As the DB field gets more mature and DBAs become more seasoned, so do companies. Companies are starting to keep more and more data, which in turn means more and more servers, which in turn means that DBAs can no longer manage things the way they used to. For example, it's no longer practical to manually run through your morning checklist, server by server, to make sure that servers are behaving and that backups are ok. This is where PowerShell comes into play. DBAs can now not only manage multiple servers at once, but they can do it very easily, often with very little coding.

Familiar concepts

PowerShell is easy because it presents you with things as drives. For example, if you wanted to access your C-drive you would use “CD C:” in PowerShell, just the same as you would in DOS. However, if you wanted to access your registry as well, you would also access it like a drive by typing “CD HKLM:” for example. Or if you wanted to access SQL Server via PowerShell, you’d type “CD SQLSERVER:”. So you can see that the concept is very simple… everything can be accessed as a drive.
Now, let’s say that you want to connect to a SQL Server, and to a specific DB. You’d simply type:
>cd SQLServer:\SQL\ServerName\Default\Databases\DBName
And with everything exposed as a drive path, you already understand the structure of everything you’ll do in PowerShell. It also makes it really easy to switch between servers, or databases, or any other objects because all you have to do is replace the name of the object and leave the rest the same.
Now that I’ve explained how PowerShell is organized let’s take a quick look at how you get information about the different drives you can work with. The command for finding drives presented to you is ‘psdrive’ and you type it at the PowerShell prompt like this:
>psdrive
And when you do, you’ll get output that looks something like this:
PowerShell commands: psdrive
Of course, your output will look slightly different depending on what you have loaded on your box, but a lot of it will be the same too. So looking around here you can see several drives listed.  I’ll go through some of the more interesting ones:
  • Variable – here you can see all the built-in and user-defined variables for your active session.
  • Function -- here you can see all the built-in and user-defined functions for your active session.
  • Env – here you can see all the environment variables and values for your box.
  • HKCU – HKey_Current_User registry hive.
  • HKLM – Hkey_Local_Machine registry hive.
And you can work with each of these just like you would the local file system.
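For example (a quick, hedged sketch against a registry path that exists on any Windows box), you can poke around the registry with the same commands you'd use on the C: drive:

>cd HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion
>dir                        # the subkeys list just like folders
>Get-ItemProperty .\Run     # the values under the Run key
>dir Env:                   # the environment variable drive works the same way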
Another major aspect of PowerShell is the cmdlet. These are the same as commands in DOS only they’re much easier to work with. The thing that makes them so much easier is their consistency. Cmdlets are always in the verb-noun format… always. Some good examples of cmdlets are get-service, out-file, get-process, format-table, and get-content. In the coming articles these cmdlets (and more) are going to become your best friends.



To use a cmdlet you simply type its name at the prompt and pass it whatever parameters it needs. Sometimes a cmdlet doesn't need any parameters to give you a default output. Take get-service for example. It works all by itself. So if you type get-service at the prompt like this:
>get-service
You’ll get output that looks like this:
PowerShell commands: cmdlets
But what if you wanted to pass it a parameter? That’s the other thing that makes cmdlets so easy to work with. They all accept parameters in the exact same way. To pass a parameter to a cmdlet simply type “-“ followed by the parameter name, then a space and the value you want to pass in. Here’s an example using get-service:
>get-service –computername Server1
See, no more wondering if you have to use a dash or a slash, and no more wondering if you have to put a space between the parameter or not. They all work the same. Of course, for all string inputs you're welcome to use double or single quotes, but if there aren't any spaces in the string it isn't necessary. All cmdlets handle errors in the same way too. Each one has a set of common parameters that they all support: Debug, ErrorAction, ErrorVariable, and WarningAction, just to name a few. This means that error handling and debugging are built into every cmdlet, so functionality will be the same across the board whether you're working with SQL Server, IIS, Exchange, or anything else. It's all the same.
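A short, hedged example tying these pieces together (Server1 is a placeholder name; the common parameters behave the same on any cmdlet):

>get-service -computername Server1 | format-table Name, Status
>get-process -name sqlservr | out-file c:\sqlprocs.txt
>get-content c:\DoesNotExist.txt -ErrorAction SilentlyContinue -ErrorVariable MyErr
>$MyErr    # the captured error sits in the variable instead of stopping the session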

Wednesday 28 March 2012

10 Tools Every Network Administrator Should Have On Their Thumbdrive




Working as a systems administrator oftentimes requires you to interact with many different computers in a single day. One minute you're dealing with cranky users complaining about slow performance on their PCs and the next minute you're soothing badly behaving domain controllers.

Having to interact with so many different PCs makes it all the more advantageous to keep a core set of tools with you at all times. Many useful tools can be run from a portable storage device and some have even been specifically designed to run on portable storage devices. The former are what I call "portable compatible" and the latter can be considered "stealthy". That is, some apps can work on a portable device but are likely to add edits to the local registry while "stealthy" apps make no changes to the registry and ostensibly leave no traces behind. With that said, here are 10 things that I believe every Windows Administrator should have on their USB thumbdrives:

1. The Sysinternals Suite

As of June 2009 there are 66 tools in the suite. A thorough understanding of each of them is improbable (unless your name is Cogswell or Russinovich), however some of them have more readily understandable uses than others. For instance, Process Explorer allows you to see deeply into each running process, its CPU history, which executable was used to launch the program, where it is and what switches were used when launching. PsExec allows you to run any process on a remote system. Ever wanted to run another computer's command prompt from your machine? PsExec is the tool to do it with! Conversely, PsKill can kill remote processes without having to install a utility on the remote computer.
As a bonus, each of the Sysinternals tools are available "live" at "live.sysinternals.com\tools" which is essentially just a file share open to the internet. You can access any of their tools from a command prompt or the run box by using a standard UNC path like this: \\live.sysinternals.com\tools\[toolname]. Furthermore, since it's a file share, you can map a drive to that UNC path and give it a drive letter (thanks go to Ed Bott for cluing me in to that one). Even if you lose your thumbdrive you can still have easy access to the latest and greatest sysinternals tools on any computer with a network connection!
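For example, from a PowerShell (or cmd) prompt you can run a tool straight off the live share, or map it to a drive letter for the session (Z: and the remote PC name below are arbitrary placeholders):

# Run Process Explorer straight off the live share - no local copy needed.
& "\\live.sysinternals.com\tools\procexp.exe"

# Or map the share to a drive letter and use it like any other folder.
net use Z: \\live.sysinternals.com\tools
& "Z:\PsExec.exe" \\SomeRemotePC cmd.exe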

2. Windows Support and Resource Kit Tools

There's a bit of confusion about this topic. Before Vista and Server 2008, the resource kit tools were bundled with their respective resource kit book series (E.g. the Windows XP Resource kit) but they could also be downloaded from Microsoft's web site. The support kit tools were included with the OS installation media and could also be downloaded from Microsoft's web site. Nowadays, while both Vista and Server 2008 have resource kit tools, it seems that at the time of this writing they can only be acquired through purchasing the Vista or Server 2008 Resource Kit book set.
Furthering the departure from normalcy, it appears that the term "Support Tools" has been abandoned in favor of the term "Remote Server Administration Tools" (RSAT). Instead of downloading the toolset separately they are now "built in" to Server 2008, but you have to add them first through the Add Features Wizard. If you want to copy them to your USB drive, the only way that I know to retrieve them is to install the category of RSAT tool that you want and then search through WINDOWS\System32 for the desired tools. It seems that there is no single collection of updated "Support Tools" (or RSATs as they're now known) that you can download. If it sounds cumbersome to move the tools to a thumbdrive it's because it is. If anyone else has a better way, please comment! If you choose to use the Server 2003 / XP support tools you'll need to unpack the support.cab file from the support\tools folder on the installation media. My preference is to simply open it with Windows Explorer and drag 'n' drop the entire cabinet file's contents into a folder on my USB stick. There are several .dll, .vbs, .chm and other files included. The tools weren't designed with portability in mind but they should work (I say "should" since I haven't ever had to use each and every tool from a portable device nor do I know anyone who has).
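If drag 'n' drop isn't your thing, the built-in expand.exe will dump the whole cabinet onto the stick in one shot (the drive letters here are placeholders for your installation media and thumbdrive):

# D: is the XP/2003 installation media, E: is the thumbdrive.
mkdir E:\SupportTools
expand.exe -F:* D:\SUPPORT\TOOLS\support.cab E:\SupportTools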

3. All Purpose Network Scanner

Nary does a day go by that your average admin doesn't have to scan a network, IP range or port scan a node. Having a good network scanner handy can greatly expedite the network troubleshooting process. I prefer to use SoftPerfect's Network Scanner which does not need an installer and can run without administrator credentials.
Other key features include the ability to list file shares (including hidden shares), send WoL packets, remotely shut down PCs, detect your external IP address, scan for logged-on users and more. However, if you want something much more powerful that can do more security-oriented audits, you could use the portable version of nmap 3.8.

4. Portable Web Browser

Having your own browser with its own preferences and bookmarks is nice but having it available to you on any PC is even nicer. There are portable versions of several of the popular "non-portable" web browsers such as FireFox Portable and Opera-USB. Those two browsers specifically state that they do not leave any personal data behind on the local system's hard drive. There are also some more obscure offerings such as portable versions of Avant, Maxthon and Sleipnir. For even more security, you could try the xB Browser from XeroBank. The xB browser is designed to work with either the Tor network or the arguably more secure XeroBank anonymity network.
"What about IE!" some may be howling. While there's no official portable version of Internet Explorer, there is a possible alternative. Using Firefox Portable, you can install IE Tab and switch to the IE rendering engine as needed. Using this method I was able to update an XP Pro machine through the Windows Update website (which rejects all browsers except Internet Explorer) from Firefox Portable. However, I was unable to empirically test whether IE Tab edits the registry (I suspect that it does). Whether traveling between a home and work computer, staying secure on a public machine or helping out Grandma with her new gaming rig, you'll be happy to have a trusted web browser that's always there for you.

5. TrendMicro HijackThis

What Windows admin hasn't had to deal with some kind of suspected malware infection regardless of whether or not an anti virus product was in place? Servers or clients, it makes no difference. Having some kind of anti malware detection program with you at all times is a must. Originally coded by Merijn Bellekom and later sold to Trend Micro, HijackThis has become one of the foremost tools used in the fight against malware. While not a "stealthy" portable app (it leaves registry edits behind) the .exe is completely self contained and can run perfectly fine on a portable drive. Best of all it's completely free and there is no EULA prohibiting it from being used in a business environment like there is for other free anti-malware products (Adaware Personal and MalwareBytes Anti-Malware, for example).  The program's primary usage is to scan key locations in the Windows registry and various places on your hard drive and then make a log file of the entries and files that it finds. The things that HijackThis finds may not be bad in and of themselves (in fact, most are innocuous). The real power of HijackThis comes from the community of volunteer logfile analyzers and the automated www.HijackThis.de web site.
HijackThis also includes some lesser known features such as a startup list generator, process manager (similar to task manager), the ability to delete a file on reboot, the ability to delete an NT service, scanning for hidden data streams and an Add/Remove programs editor (handy to delete entries for the occasional application that uninstalled but didn't remove its entry in the Add/Remove Programs list). Because of the power of HijackThis and the potential for disaster if you delete the wrong files, please read some tutorials before you jump into it. This is an excellent guide to start with.




6. OpenOffice Portable

Wherever you go and whichever computer you're using at the moment, the ability to create and edit a diverse array of document types is crucial. You never know when you'll need to hand craft a quick XML file, quickly modify your boardroom presentation or create a spreadsheet that aids in the creation of scripts.  OpenOffice is the premier Open Source Office Suite for Windows, Linux and OS X and, thanks to the folks at PortableApps.com, it now comes in a portable format. It includes Writer (text document creator), Calc (spreadsheets), Impress (slide presentation creator; think PowerPoint), Draw (self explanatory), Math (mathematical formula creator) and Base (database creation tool; think Microsoft Access). OpenOffice can open many document formats including Office 2007 .docx and .xlsx files. It can also save files in Office 2000 format (no further up the Office version chain, though) as well as a host of open formats.

7. Terminal Emulator

Many devices (especially networking equipment) have an RS-232 (serial), SSH or Telnet (horrors!) interface that you need to connect to in order to do some deep hacking. One of the more popular terminal emulators is PuTTY, and it now comes in a portable version: portaPuTTY. PortaPuTTY has been conveniently modified to store configuration and session data in flat files rather than in the registry. There's also the similarly named PuTTY Portable, which is a PortableApps.com application.
The main PuTTY application has been forked to produce KiTTY, which itself is not portable. However (you guessed it), it has been further forked to produce its own portable version: Portable KiTTY. KiTTY includes more features than PuTTY, so check the feature list to see whether they would actually be useful to you. Another free option is TunnelierPortable or TunnelierU3 (designed to work with U3 devices), both based on Tunnelier from Bitvise (which officially approves of these "fan projects"). However, make sure to read the Tunnelier license agreement to understand under what circumstances you are obliged to purchase a license.

8. Password Manager

Keeping track of passwords is hard, so people tend to write passwords down on slips of paper and "hide" them in a secure place (for the record, placing sticky notes under your desk phone is not a secure place). Many people also reuse the same password, or a small pool of passwords, for many different accounts so they won't forget them. Password managers can aid in the implementation of better security by allowing you to have complex passwords for each account without having a memory like Kim Peek. KeePass is an open source tool released under the GPLv2 license and is one of the more popular password safes. Fortunately, it also comes in a portable edition. KeePass database files are encrypted with the Twofish or AES-256 encryption algorithms, so if you lose the USB drive it would be exceptionally difficult for someone to crack the file. Version 2.0 of KeePass supports keeping the password file on an HTTP/FTP server, which could conceivably increase the portable uses for the application. Another portable password manager is the closed-source RoboForm 2 Go.
It is offered in a U3 and a non-U3 format. They even offer their own custom RoboForm USB key. RoboForm can fill in long registration forms with one click, it can synchronize your passwords between computers, and it claims to be able to defeat keyloggers. There are two editions of RoboForm 2 Go: a free version and a Pro version. The free version cannot be used in a business environment for more than 30 days. Since RoboForm caters to filling out long web forms, it is designed to integrate closely with your web browser in the form of a toolbar. It runs in the background from the portable storage device, with a SysTray icon being the only visible indicator of its existence outside of the browser. For even more portability, you can use the RoboForm Online service, which stores your passwords in their cloud. There are many more password managers out there for you to evaluate, but those are two of the more popular titles. Now you can have unique, massive passwords on your switches, domain controllers and other important devices and not worry about being locked out because you can't remember passwords that are more complex than the names of Star Trek characters.

9. Remote Desktop / VNC client

The ability to remotely control a distant PC is undeniably valuable. But what if you find yourself on an unfamiliar computer? Wouldn't it be nice to have pre-made RDP connections with you wherever you go? Or a familiar VNC client with you at all times? It would be nice; however, there is a surprising dearth of portable RDP and VNC clients available. TRAVEL@Clip (yes, it's supposed to be typed that way) is one of those few.
TRAVEL@Clip is a $25 USD program that allows you to establish RDP connections (no VNC support) without the need for administrator rights. It keeps all settings in an encrypted file on the USB drive and does not make any changes to the host PC. It can save up to 9 separate connections. I was surprised that I was unable to find any stable freeware/open source portable RDP tools, so it looks like you'll have to spend a little cash for RDP mobility. There is one possibility worth noting: ChrisControl. Intended for use on a Windows PE disc, it is said not to write settings to the registry. It purportedly allows you to connect to remote PCs via RDP or VNC. It was even supposed to be able to install an UltraVNC server on the remote computer if one wasn't already installed, and uninstall it when you disconnect. That feature sounded exciting and even reminded me of Dameware Mini Remote Control's remote installation and uninstallation features… except for one important point: Dameware works. I tried two versions of ChrisControl and was never able to get it to work. Others on the web have stated that it worked fine for them. I only mention it here because you might have better success with it, or future versions (if there are any) might work more smoothly. As for VNC tools that are specifically designed for portability, well, if there are any, they're well hidden. It seems that most of the major VNC viewer applications are "portable compatible", meaning that they will work from a portable storage device, but none of them are advertised as not touching the local machine's registry. That may or may not be acceptable in your environment. The major VNC viewer applications are TightVNC Viewer (look for the package with the description "Viewer executable, does not require installation"), RealVNC Viewer and UltraVNC Viewer.

10. Network Protocol Analyzer

Wireshark, seemingly everyone's favorite protocol analyzer (although Microsoft's Network Monitor is pretty cool too), now comes in a portable format. Well… pseudo-portable. For Wireshark to work, it needs the WinPcap package installed. Every time you launch Wireshark Portable, it checks whether the WinPcap driver is installed and, if it isn't, asks to install it (of course, you'll need the proper permissions on the computer to do that). When you exit Wireshark Portable, it courteously uninstalls WinPcap in an attempt to leave the computer as untouched as possible. If you can live with those caveats, then you can wade knee-deep in raw streams of network traffic on any computer you touch. If you can't, you'll have to use a tool that relies on a method called "raw sockets", which has some limitations but should be fine for most situations. IP Sniffer is a decent utility that uses raw sockets; another raw-sockets option is NirSoft's SmartSniff.
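To make the raw-sockets idea concrete, here is a minimal Python sketch (my own illustration, not taken from any of the tools above) that captures a few incoming IP packets on Windows. It assumes administrator rights and that you substitute the IP address of the interface you want to watch:

```python
import socket

HOST = "192.168.1.10"  # assumption: replace with the IP of the local interface to sniff

# Create a raw IP socket, bind it to the chosen interface, and enable
# SIO_RCVALL so Windows delivers all incoming IP packets to this socket.
sniffer = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_IP)
sniffer.bind((HOST, 0))
sniffer.ioctl(socket.SIO_RCVALL, socket.RCVALL_ON)  # requires admin rights (Windows only)

try:
    for _ in range(10):  # capture a handful of packets, then stop
        packet, addr = sniffer.recvfrom(65535)
        # Bytes 12-15 and 16-19 of the IP header hold the source and destination addresses.
        src = socket.inet_ntoa(packet[12:16])
        dst = socket.inet_ntoa(packet[16:20])
        proto = packet[9]  # protocol number (6 = TCP, 17 = UDP, ...)
        print(f"{src} -> {dst}  protocol {proto}  {len(packet)} bytes")
finally:
    sniffer.ioctl(socket.SIO_RCVALL, socket.RCVALL_OFF)  # restore normal receive mode
    sniffer.close()
```

Dedicated tools such as SmartSniff do far more parsing and error handling than this, but the capture mechanism is essentially the same.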
I'm sure that if you include those 10 things on your thumbdrive you'll be solving problems quicker, saving the day more often and hopefully getting home earlier. Since thumbdrives can be easily lost, consider creating a script using Task Scheduler and RoboCopy (built-in to Vista and Server 2008, included in the Server 2003 Resource Kit tools) to periodically create a backup. For even more portable applications you might want to look into the PortableApps project which has a huge collection of applications that are said to leave no traces of themselves whatsoever on the host computer. Who wouldn't want to be able to play Sudoku from their thumbdrive? Enjoy!
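As a rough sketch of that backup idea, here is a small Python script that mirrors a thumbdrive to a local folder with RoboCopy; the drive letter and destination path are placeholders, not values from the article, and you would schedule the script with Task Scheduler as suggested above:

```python
import subprocess
import sys

# Mirror the thumbdrive to a local backup folder using RoboCopy.
# E:\ and the destination path are assumptions -- adjust for your own drives.
SOURCE = "E:\\"                        # thumbdrive root
DEST = r"C:\Backups\Thumbdrive"        # backup location

result = subprocess.run(
    ["robocopy", SOURCE, DEST, "/MIR", "/R:2", "/W:5"],
    capture_output=True, text=True,
)

# RoboCopy exit codes below 8 indicate success (0 = nothing to copy, 1 = files copied, ...).
if result.returncode < 8:
    print("Backup completed, RoboCopy exit code:", result.returncode)
else:
    print("Backup failed:", result.stderr or result.stdout)
    sys.exit(1)
```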

Finding Public Folder Calendars and their Owners




One of the tasks associated with the new DST changes is identifying Public Folder calendars and their associated owners.


Finding Calendar-type Public Folders

When you create a folder that holds appointment items (i.e. a calendar-type PF), this sets the PR_CONTAINER_CLASS (0x3613001E) property on the folder to IPF.Appointment. So to find all calendar-type Public Folders, all we need to do is look for the Public Folders that have that property set to that value.
  1. After you download and extract the file from the above link, open PFDAVAdmin, go to Tools > Options and enable logging.

  2. Connect to the Exchange server with PFDAVAdmin.

We are looking for the PR_CONTAINER_CLASS property of each Public Folder. To find the calendar-type folders, we can either export the properties of every folder in the hierarchy to a tab-delimited file, or use a more complicated Custom Bulk Operation.
The easiest way to do this is to use Export Properties. Follow these steps:
  1. Go to Tools > Export Properties.
  2. Leave the scope set to "All Public Folders", and select an output file.
  3. In the Properties To Export list, check the box for PR_CONTAINER_CLASS. You can uncheck the other checkboxes that are already checked by default, or leave them checked; for the purposes of this article it doesn't really matter.
  4. Click OK and the export will begin.
  5. You now have a tab-delimited file containing properties for every folder in the hierarchy. You can open this file in Excel (or Notepad if it's short enough) and sort by the PR_CONTAINER_CLASS column to identify all IPF.Appointment folders, or filter it with a short script like the sketch below.
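If you'd rather not sort the file by hand, a few lines of script can pull out the calendar folders. The sketch below is only an illustration: it assumes the export is a tab-delimited file whose header row includes a folder-path column and a PR_CONTAINER_CLASS column, and the column names shown are placeholders you should adjust to match your actual export.

```python
import csv

EXPORT_FILE = "pf_properties.txt"    # path to the Export Properties output (placeholder)
PATH_COLUMN = "Folder Path"          # assumed header name -- check your export file
CLASS_COLUMN = "PR_CONTAINER_CLASS"  # assumed header name

# Read the tab-delimited export and keep only rows whose container class
# marks them as calendar folders.
with open(EXPORT_FILE, newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t")
    calendars = [row[PATH_COLUMN] for row in reader
                 if row.get(CLASS_COLUMN, "").strip() == "IPF.Appointment"]

print(f"Found {len(calendars)} calendar-type Public Folders:")
for path in calendars:
    print(" ", path)
```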
A more complicated way to accomplish this is to use a Custom Bulk Operation. With this method you can perform the task only on the folders that meet your search criteria, instead of on every folder in the PF hierarchy. Follow these steps:




  1. Go to Tools > Custom Bulk Operation.
  2. Under "Specify the base folder", set it to "Public Folders".
  3. In the "Overall Filter" box, enter "(&(0x3613001E=IPF.Appointment))" (without the quotes).
  4. Click the Add button, choose "Other folder properties", and click OK.
  5. Choose "Export" and select an export file.
  6. In the Property dropdown, choose "PR_CONTAINER_CLASS : 0x3613001E" and click Add.
  7. Choose whatever other properties you want to export and add those too. Then click OK.
  8. You should now be back on the Custom Bulk Operation window. Click OK to begin the export.
  9. This will export properties only for calendar folders.

Finding owners of Calendar-type Public Folders

After we've identified all the calendar-type Public Folders, we can use PFDAVAdmin's Tools > Export Permissions option to export the permissions for all the folders and see who the owners are. Better still, we can use a Custom Bulk Operation to export permissions only for folders whose class is IPF.Appointment.
Follow these steps:
  1. Go to Tools > Custom Bulk Operation.
  2. Under "Specify the base folder", set it to "Public Folders".
  3. In the "Overall Filter" box, enter "(&(0x3613001E=IPF.Appointment))".
  4. Click the Add button, choose "Folder Permissions", and click OK.
  5. Choose "Export" and select your export file.
  6. Choose either legacyExchangeDN or Account Name format, and click OK.
  7. You should now be back on the Custom Bulk Operation window. Click OK to begin the export.
You now have a file that contains permission information only for folders where the PR_CONTAINER_CLASS is IPF.Appointment. By looking at the file you can see who the owners of all these folders are (or pull the owner entries out with a short script like the sketch below).
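As with the property export, a small script can save some scrolling. This is only an illustration: it assumes the permissions export is tab-delimited with columns for the folder path, the user (in whichever format you chose) and the permission role, and the header names are placeholders to adjust to whatever PFDAVAdmin actually wrote.

```python
import csv
from collections import defaultdict

EXPORT_FILE = "pf_permissions.txt"  # placeholder path to the permissions export
PATH_COLUMN = "Folder Path"         # assumed header names -- check your export file
USER_COLUMN = "User"
ROLE_COLUMN = "Role"

# Collect, for each calendar folder, the users who hold the Owner role.
owners = defaultdict(list)
with open(EXPORT_FILE, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        if row.get(ROLE_COLUMN, "").strip().lower() == "owner":
            owners[row[PATH_COLUMN]].append(row[USER_COLUMN])

for path, users in owners.items():
    print(f"{path}: {', '.join(users)}")
```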