Skill: Intermediate

Reorganizing the way you access your data and losing your drive letters along the way

(published in Microsoft TechNet Magazine USA, May 2012)

The immense growth of data and the cost of storing and securing it are a major concern for every IT department. Often the data is stored in an arbitrary way, based on practices that go back to the eighties, and accessed by users through mapped drives that are just as antiquated.
Imagine an organization called Contoso with locations in New York and San Francisco that used to operate as independent companies, each with its own IT infrastructure. Some of the most visible differences are the drive mappings used at each location and the way data is organized and stored.
Management has decided that both locations need to start working as one entity, and the different drive mappings make sharing data between the locations quite a challenge. To make the situation even more challenging, different drive mappings are also used for applications. Managers of newly created or merged departments, now responsible for all the information of their department stored at the different locations, suddenly find that they cannot locate the information they need. So how do we solve this so that all locations can work together efficiently?
How this issue can be tackled without spending too much money is clarified in more detail in the first part of this article, called “Luctor et Emergo” (I struggle and emerge, the motto of the province I live in). Although the article as a whole is targeted at system administrators, this part is especially interesting for IT management.
The second part of this article, called “Who needs mapped drives anyway?”, covers the more technical side: how to get rid of this antiquated manner of accessing data and introduce network locations as an alternative that provides structured, uniform data access.

Luctor et Emergo (I struggle and emerge)

The first part of this article is only partly an IT topic, although its effects hit every IT department hard: the exponential growth of data and the unstructured way it is stored. As long as people regard this as an IT problem, it will never be solved. The only solution is to treat it as an overall business challenge, so the project needs to be a business project and not a purely IT one. Once (higher) business management sees the benefits of restructuring and cleaning up the data, the gains in efficiency can be enormous! For the IT department, such a project is a major opportunity to present itself as a real internal business partner instead of “those guys with attitudes who only cost money and fix what is not broken”.

The goal is to have the business working in a structured manner in which each department or other logical unit appoints one “owner” who is responsible for the way data is stored and shared (including who has what access to where). The departmental owner, together with his or her colleagues, designs a new structure for the data and evaluates the existing data. The next step is defining who has access to the data, based on roles (job functions).
A large percentage of stored data is old and almost never accessed anymore; recent studies even say that only 7 percent of stored data is actually relevant to the user! This outdated data can either be deleted or moved to an archive on cheap storage (such as a RAID 5 NAS instead of an expensive SAN). The owner is responsible for putting all “old” data into an intermediate archive folder within the storage allocated to the department. The IT department will move the data in that folder to the actual archive twice a year. I advise making the actual archive read-only for everyone except the IT department, so no one is tempted to work on those old files. If a document becomes relevant again, it can be copied back to the live environment and worked on.
The owner is also responsible for deleting files that are no longer needed, a challenge in itself because most people are very reluctant to delete anything!
The most important part of the project is restructuring the remaining data so that everyone in every location knows where to find the information they need. I recommend that every department have at least the following four folders: Management, Common, Public, and Archive. The Management folder speaks for itself: it hosts all information of the departmental manager(s), and access is restricted to those persons. The Common folder holds information that should be accessible to everyone within the department, and Public holds information that should be accessible to everyone within the organization. Use the Archive folder as a dumpster for information that is no longer used frequently; the IT department will move its contents once or twice a year to the real archive location on cheaper storage.
The content owner within the department is also responsible for maintaining the permission levels for the department folder. A good way of setting this up is to base permissions on roles (job descriptions) within the organization, such as procurement-srbuyer, procurement-jrbuyer, procurement-secretarial, and so on.
Once the schema, including the new folder structure and the permission structure, is written down (Excel will do nicely), the overview is handed to the IT department. Of course, there must be a limit on how deep the different security levels go; I suggest no more than three levels deep from the root of the department folder, using only the Modify, Read-only, and Traverse Folder permission types.
The IT department creates the security groups based on the roles defined, adds the appropriate persons to those groups, and sets up the permissions per group as stated in the documentation provided by the departmental owner.
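To give an idea of what this step looks like on the IT side, here is a minimal PowerShell sketch, assuming the Active Directory module is available; the group name, OU, member account, domain, and folder path are hypothetical examples only:

# Sketch: create one role-based security group and grant it Modify rights on a department folder.
# Group name, OU, member, domain, and folder path below are hypothetical examples.
Import-Module ActiveDirectory

New-ADGroup -Name "procurement-srbuyer" -GroupCategory Security -GroupScope Global -Path "OU=Roles,DC=contoso,DC=com"
Add-ADGroupMember -Identity "procurement-srbuyer" -Members "jdoe"

# Grant the group Modify on the department folder; subfolders and files inherit the permission
$folder = "E:\Departments\Procurement\Common"
$acl = Get-Acl -Path $folder
$rule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList "CONTOSO\procurement-srbuyer", "Modify", "ContainerInherit,ObjectInherit", "None", "Allow"
$acl.AddAccessRule($rule)
Set-Acl -Path $folder -AclObject $acl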
The owner, and the owner alone, communicates any changes within the department to the IT department.
Although the whole process described above does not take long to explain, it can and most likely will take a long time to implement, and buy-in from all parties involved is an absolute must! Once this structure is in place, however, it brings many advantages, from a logically defined structure for storage and permissions to a reduction in the amount of data stored on expensive storage.

Who needs Mapped Drives anyway?

In the good old days of IT, when networking was becoming popular, an easy method of accessing network-stored data was needed. The solution at hand was easy to implement and, because data demands were not yet that high, not complicated to maintain: the mapped network drive with a drive letter was born. Now, in the second decade of the 21st century, we are still using these drive mappings, but the amount of data has exploded and company structures are changing, with a requirement to share data easily between locations. Especially within IT, technology advances almost at light speed and keeping up to date with everything new is an everyday challenge. Yet we still cling to a more than 30-year-old technique to access our data!
In many cases, locations use different drive letters for different kinds of information, which makes it almost impossible to share information in a logical and efficient manner. One side effect is that data gets copied over to the other locations, increasing the already enormous amount of data with duplicated files.
Looking at Contoso’s New York (figure 1a) and San Francisco (figure 1b) locations, it is painfully clear that sharing information between the two will be a major challenge. The file storage structures of the two locations differ quite a bit because they used to operate as more or less independent entities. With the requirement to start working as one organization, some stringent actions are needed to enable efficient file sharing.

Figure 1a. File structure New York and Figure 1b. File structure San Francisco

The first task is creating a Distributed File System (DFS) folder structure. DFS acts like a blanket over the file structure and is completely transparent to the user: it is very easy to change the server the files are stored on without the user ever knowing about it. DFS makes the life of every system administrator a bit easier and, as the proverbial cherry on top, provides an intelligent way of keeping replicated copies of a folder in more than one location, with each change replicated very efficiently. When a file changes, not the complete file is transmitted but only the changed bytes, which can save a lot of bandwidth!

We start by determining the required folder structure. The root folders (or namespaces) in our example will be Applications, Archive, and Departments. Of course, in real-life scenarios the folder list will be more extensive. For one reason or another, the DFS Management tool uses the terms folders and namespaces for the same thing: in the left pane they are referred to as folders and in the right pane as namespaces. Underneath each namespace or folder, other namespaces and/or folder targets are created. Folder targets point to the actual physical locations on the file servers. An example of a DFS structure is shown below.
Figure 2. Example of a DFS folder structure.
To avoid a wild growth of folders at the first level within a department, I suggest restricting the creation of folders directly on the file server(s) at this level. This means that all first-level folders must be created in DFS and linked to the appropriate folders on the file server(s), which makes it impossible for users to ‘pollute’ the structure with all kinds of folders. For example, the Finance folder and its first-level folders Accounting, Archive, Business Control, and so on are created by IT, and no user can create extra folders. Whenever a new first-level folder is needed, the owner must include it in his or her overall schema of the departmental folders and communicate this to IT.
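For reference, both the namespace roots and these first-level folders can be created from PowerShell. The sketch below assumes a domain-based namespace hosted on a hypothetical file server named FS01 and requires the DFSN module that ships with Windows Server 2012 or later; on older servers the DFS Management console or dfsutil.exe does the same job. Server, share, and folder names are examples only:

# Sketch: create a domain-based DFS namespace and one first-level folder with a folder target.
# Server, share, and folder names are hypothetical examples.
Import-Module DFSN

# Create the domain-based namespace \\contoso.com\Departments, hosted on file server FS01
New-DfsnRoot -Path "\\contoso.com\Departments" -TargetPath "\\FS01\Departments" -Type DomainV2

# Create the Finance folder in the namespace and point it at the physical share on FS01
New-DfsnFolder -Path "\\contoso.com\Departments\Finance" -TargetPath "\\FS01\Finance"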
Once the DFS structure is in place, it needs to be made available to the users in an easy-to-use manner. Of course, we could use drive letters again to map the root folders, but that is so last century! Therefore, we are going to do something completely different. In Windows Vista and Windows 7, it is possible to create network locations, which are listed underneath the physical drives in Windows Explorer (figure 3).
Figure 3. Network locations listed underneath the physical drives in Windows Explorer.

Let’s be honest: this is far more appealing to users than drive letters, which don’t really have any meaning and are chosen arbitrarily by us IT people!

To create the network locations for everyone, a PowerShell script is used that runs as a logon script. The script, addnetworkloc.ps1, found on a TechNet forum (http://social.technet.microsoft.com/Forums/en/ITCG/thread/fd34260e-ee4c-47c5-8c69-872a5239745f), is as follows:

param(
    [string]$name,
    [string]$targetPath
)

# Get the base path for network locations (the Network Shortcuts special folder)
$shellApplication = New-Object -ComObject Shell.Application
$nethoodPath = $shellApplication.Namespace(0x13).Self.Path

# Only create the network location if it doesn't already exist and the remote path exists
if ((Test-Path $nethoodPath) -and !(Test-Path "$nethoodPath\$name") -and (Test-Path $targetPath))
{
    # Create the folder
    $newLinkFolder = New-Item -Name $name -Path $nethoodPath -Type Directory

    # Create the Desktop.ini file that marks the folder as a network location
    $desktopIniContent = @"
[.ShellClassInfo]
CLSID2={0AFACED1-E828-11D1-9187-B532F1E9575D}
Flags=2
ConfirmFileOp=1
"@
    $desktopIniContent | Out-File -FilePath "$nethoodPath\$name\Desktop.ini"

    # Create the shortcut file that points to the target path
    $shortcut = (New-Object -ComObject WScript.Shell).CreateShortcut("$nethoodPath\$name\target.lnk")
    $shortcut.TargetPath = $targetPath
    $shortcut.IconLocation = "%SystemRoot%\system32\SHELL32.DLL, 85"
    $shortcut.Description = $targetPath
    $shortcut.WorkingDirectory = $targetPath
    $shortcut.Save()

    # Set attributes on the files & folders
    Set-ItemProperty "$nethoodPath\$name\Desktop.ini" -Name Attributes -Value ([IO.FileAttributes]::System -bor [IO.FileAttributes]::Hidden)
    Set-ItemProperty "$nethoodPath\$name" -Name Attributes -Value ([IO.FileAttributes]::ReadOnly)
}

This script is run with two arguments: the first is the desired name of the network location and the second is the physical location in UNC format. For example:
addnetworkloc.ps1 Departments \\contoso.com\departments
To be able to run it as a logon script, two Group Policy Objects need to be created in Active Directory. The first one is used to allow PowerShell scripts to be run as logon scripts. By default, PowerShell scripts are prevented from running because the security settings built into Windows PowerShell include something called the “execution policy”, which determines how PowerShell scripts run. The default execution policy is “Restricted”, meaning that scripts, including those you write yourself, will not run. This can be fixed manually, but we are going to do it by GPO. In the Group Policy Management Console, we create a new GPO and call it something like “GlobalC-Enable PowerShell”. In this GPO, under Computer Configuration, Policies, Administrative Templates, Windows Components, Windows PowerShell, the option “Turn on Script Execution” is set to Enabled with the “Allow all scripts” setting. GlobalC is a naming convention I use in which Global indicates that the GPO is used for more than one Organizational Unit and C means that the GPO contains computer configuration settings.
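For reference, the manual per-machine equivalent is a single cmdlet run from an elevated PowerShell prompt. This is just a sketch; the GPO is the way to deploy the setting consistently, and RemoteSigned is shown here as a common, slightly stricter choice than the GPO’s allow-all setting:

# Manual per-machine alternative to the GPO setting (for reference only; run from an elevated prompt)
# RemoteSigned allows locally created scripts to run; downloaded scripts must be signed.
Set-ExecutionPolicy RemoteSigned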

The second GPO is called “GlobalU-Set Network Locations” and is used for the PowerShell logon script, which is configured at User Configuration, Policies, Windows Settings, Scripts, Logon, as shown in figure 4:

Figure 4. Logon script for creating the new network locations.

Once this GPO is active, all users within the scope of the GPO will get the new network locations added. During the migration period, these can exist alongside the old drive mappings, which can be deleted after some time.

What about applications that still need drive mappings?

Although working with network locations instead of mapped drives is perfectly fine for accessing data, it can cause issues with applications that access network resources. Most modern applications now support either network locations or UNC paths, so in that case only shortcuts and/or configuration files need to be adjusted; however, there will always be applications that cannot work without drive letters. So how are we going to resolve this issue? Several methods are possible. The easiest, but also the ugliest, is creating a CMD file that maps the drive, runs the application, and deletes the mapped drive when the application exits. The ugly part is that a black CMD box remains visible for as long as the program runs.
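A minimal sketch of such a wrapper is shown below; the drive letter, UNC path, and application path are placeholder examples and should be adjusted to your own environment:

@echo off
rem Wrapper for a legacy application that insists on a drive letter (paths below are examples only)
net use X: \\contoso.com\departments\Finance /persistent:no
"X:\LegacyApp\legacy.exe"
net use X: /delete /y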
Another solution is creating a Group Policy with user preferences:

Figure 5. Drive Maps under user preferences in a Group Policy Object.

A new mapping can be created easily:

 

Figure 6. Creating a new drive mapping.

You can even configure item-level targeting, where the setting is only applied if certain conditions are met:

Figure 7. Item-level targeting for a drive mapping.

Summary

Getting rid of drive mappings is, from a technical point of view, not that difficult. The biggest challenge is raising awareness within your company that data needs to be restructured and revised. Using network locations instead of drive letters is just a small tool that helps your colleagues access their data in a more logical manner. If you achieve this, your IT department will get a jump-start on becoming the business partner it needs to be.
I hope this article inspires you to take the leap forward so that you, too, will be able to say, “Who needs mapped drives anyway?”
