SQL Server error when using Named Instance

I encountered this problem a while ago while setting up a test lab with Configuration Manager 2012 R2 in my own environment. I had installed all prerequisites and was about to do the SQL database setup in the SCCM Setup Wizard, so I typed in all the correct information, including the Named Instance I had chosen during the installation of the SQL Server database, and the firewall exceptions were in place.

At this point everything was up and running – I could connect to the database and open SQL Server Management Studio, so all that remained was the SCCM Setup Wizard. But when I pressed “Next”, I got the following error message.


  • The Instance Name was created and typed in correctly
  • I assumed the port for the SQL database was set to 1433 (1433 is our default)
  • The firewall exceptions were in place (I even tried turning the firewall off completely)
  • The account I used was the default admin account configured during SQL Server setup

After verifying the above several times, I still could not figure out why I got this error. I kept trying to get it to work until I remembered that I had not used the Default Instance name (MSSQLSERVER) but had created my own. That turned out to be the cause: ConfigMgr does not support dynamic ports, and a Named Instance is automatically configured to use a dynamic port. NOTE: There is nothing wrong with using Named Instances – they give you better isolation in your environment than the default name – so do what you prefer in your own environment.

You need to do the following changes to make it work when using a Named Instance:

  1. Open “SQL Server Configuration Manager” (search for it if you can’t locate it)
  2. In “SQL Server Configuration Manager”, expand “SQL Server Network Configuration” -> click “Protocols for xxx” (xxx = the Named Instance) -> right-click “TCP/IP” in the right-hand pane -> click “Properties”
  3. You will see two tabs. Click “IP Addresses” -> scroll down to the “IPAll” settings (this is the information the SQL Server database uses) -> clear the “TCP Dynamic Ports” field so it is blank -> type the default SQL database port 1433 (1433 is also the default TCP port for a Default Instance) in the “TCP Port” field
  4. You can also make these changes for all the individual IP settings (IP1, IP2, IP3 etc.) if you want to be sure, but it is not required – it should work fine with just the “IPAll” change from step 3
  5. To apply the changes, restart the SQL Server service for your Named Instance, as shown in the picture below. Click “SQL Server Services” -> right-click “SQL Server (xxx)” -> choose “Restart”
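Once the static port is set and the service restarted, it can be worth confirming from the site server that the instance is actually reachable on TCP 1433 before rerunning the ConfigMgr wizard. A minimal sketch in Python (the hostname is a placeholder for your own SQL Server; any TCP port check, e.g. telnet, works just as well):

```python
import socket

def sql_port_open(host, port=1433, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace "SQLSERVER01" with your own server name):
# print(sql_port_open("SQLSERVER01", 1433))
```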

Windows 8 / 8.1 sysprep fails during capture

Problem case:

Creating a Windows 8 or Windows 8.1 reference image fails – whether you run sysprep manually, or it runs during the Prepare OS (sysprep) task sequence step in Microsoft Deployment Toolkit or System Center Configuration Manager.

When running sysprep manually within Windows you will see this error box:


Running sysprep through Microsoft Deployment Toolkit or System Center Configuration Manager, you will see the following error entries in SetupErr.log:

<Date> <Time>, Error SYSPRP Package <PackageFullName> was installed for a user, but not provisioned for all users. This package will not function properly in the sysprep image.
<Date> <Time>, Error SYSPRP Failed to remove apps for the current user: 0x80073cf2.
<Date> <Time>, Error SYSPRP Exit code of RemoveAllApps thread was 0x3cf2.


Sysprep has an additional provider that is added in Windows 8 / 8.1 to clean Appx packages and generalise the image. The provider will only work if the Appx package is a per-user package or an all-user provisioned package.

Per-user package means that the Appx package is installed for a particular user account and is not available for the other users of the machine.
All-user package means that the Appx has been provisioned into the image so that all users who use this image will get the App.

If an all-user package that was provisioned into the image was manually un-provisioned but not removed for a particular user, the provider will run into an error while cleaning out this package during sysprep. The provider will also fail if an all-user package provisioned into the image was updated by one of the users on the reference machine.

Additional information: When a Windows 8 / 8.1 machine has been online for 60 minutes, it automatically starts to download (stage) Appx packages and updates. This is useful because, when a user later updates an app through the Windows Store, the update files are already on the machine and don’t have to be downloaded. Some of these packages may install per-user rather than for all users. In practice this means that if you capture the reference machine within the first 60 minutes, sysprep will still succeed.


IMPORTANT: Remove all per-user Appx packages before running sysprep on the reference machine. Otherwise you might have to start building your reference image from scratch.

Before starting the sysprep procedure make sure to remove all additional users (except administrator) logged onto the computer, along with their associated files.

Deleting all staged Appx packages:

Run the command “Get-AppxProvisionedPackage -Online | Remove-AppxProvisionedPackage -Online” at the PowerShell prompt. This lists all provisioned (staged) Appx packages AND, since the output is piped to Remove-AppxProvisionedPackage, removes them all.


Note: You will get RED errors – this is expected due to dependencies.

Deleting specific staged Appx packages:

To get a list of the provisioned packages, run the following command:

Get-AppXProvisionedPackage -online | select PackageName

To remove a particular provisioned app, run the following command:

Remove-AppXProvisionedPackage -Online -PackageName <PackageName>
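If you automate your builds, it can also help to scan SetupErr.log for this failure signature before starting the capture. A small sketch in Python – the file path and sample lines are illustrative, patterned on the log entries quoted above:

```python
import re

# Signature of the per-user Appx sysprep failure seen in SetupErr.log.
SYSPRP_APPX_ERROR = re.compile(r"SYSPRP.*(0x80073cf2|not provisioned for all users)")

def appx_sysprep_errors(log_text):
    """Return the SetupErr.log lines that match the per-user Appx failure."""
    return [line for line in log_text.splitlines()
            if SYSPRP_APPX_ERROR.search(line)]

# Usage (path is the usual location, adjust as needed):
# errors = appx_sysprep_errors(open(r"C:\Windows\System32\Sysprep\Panther\SetupErr.log").read())
```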

Now, capture your reference image and enjoy your Windows 8 / 8.1 deployment.


2015 – The “Year of the Super Mega Breach”?

For the past few years, data breaches have been increasing massively – rising in both frequency and scope. 2011 used to be known as the “Year of the Data Breach”. Then came 2013, when data breaches overshadowed those of previous years so much that Symantec took to calling it the “Year of the Mega Data Breach”. Last year, in 2014, we saw many large-scale breaches hit big companies around the world; the impact was global and devastating to many. As a result of the recurring breaches, including one high-profile case involving Sony, President Obama proposed a broad cybersecurity protection plan for the US that would see businesses and government sharing threat data and working together to prevent malicious attacks and prosecute those behind them. The data collected through such attacks ranges from personal communication to customer records and intellectual property, and every breach damages users, clients, infrastructure and the overall brand. Among the more prominent breaches:

Sony was hit with a major attack that had the whole company on its knees for several days until they could find and plug the hole. The breach allegedly came after Sony announced the launch of the movie “The Interview”, about North Korea and its leader, but evidence more likely points to the attack coming from an internal source: disgruntled, code-savvy employees known as “hacktivists”. Wired has a good article with more details under the headline “Sony Got Hacked Hard: What We Know and Don’t Know So Far“.

This year, AT&T had a breach where data from about 280,000 customers was stolen from their call centers and sold to third parties. The breaches were caused by internal workers who stole names, Social Security numbers and other information. AT&T had to pay a record-breaking $25 million fine to settle a complaint filed by the Federal Communications Commission (FCC). More information can be found in The New York Times article “F.C.C. Fines AT&T $25 Million for Privacy Breach“.

Besides those, JPMorgan Chase, P.F. Chang’s, Snapchat and many others have experienced large security breaches within the last year that hurt people directly.

What does this mean?

Security breaches won’t stop anytime soon. By the end of April 2015, a little over 270 breaches, exposing more than 100 million records in total, had already been recorded in the US alone. The full report and current numbers for breaches in the US can be found in the 2015 ITRC Data Breach Reports.

At the 2015 RSA conference in San Francisco, RSA president Amit Yoran held an exhilarating keynote entitled “Escaping Security’s Dark Ages” where he stated:

“You don’t have to be much of a visionary to see that 2015 will become the ‘Year of the Super Mega Breach’. 2014 was yet another reminder that we are losing this contest.”

You can watch the full keynote below.

After hearing this statement, you might want to ask yourself:

Why do these data breaches happen?

Over the past year, 25% of data breaches involved system glitches (both IT and business process failures), 44% involved a malicious or criminal attack, and 31% were down to negligent employees. SOURCE: ABSTRACTA

Will it happen to me?

There is a 19% probability of a data breach over the next two years if your company handles a minimum of 10,000 records, though it has to be mentioned that some industries are more exposed than others. The highest risk is found in the public sector, where the probability of incurring a data breach is 23.8%, while even the Energy and Utilities sector still has a risk as high as 7.5%. SOURCE: ABSTRACTA

How much could a data breach cost?

According to InformationWeek, it takes a large organization on average 31 days at a cost of $20,000 per day to clean up and remediate after a cyberattack, a number that has increased by 23% year-over-year. SOURCE: DARKREADING
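To put those averages in perspective, the quoted figures imply a total cleanup bill per attack of roughly $620,000:

```python
# Average remediation figures quoted above (InformationWeek / DarkReading):
days = 31            # average days to clean up after a cyberattack
cost_per_day = 20_000  # cost per day in USD
total = days * cost_per_day
print(f"Average cleanup cost per attack: ${total:,}")  # $620,000
```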

The Ponemon Institute, an international research institute that measures trust in privacy and security, released a report in May 2014 sponsored by IBM called the “2014 Cost of Data Breach Study“. In their findings they revealed some common global trends:

  • The cost of a data breach is on the rise. Most countries saw an uptick in both the cost per stolen or lost record and the average total cost of a breach.
  • Fewer customers remain loyal after a breach, particularly in the financial services industry.
  • For many countries, malicious or criminal attacks have taken the top spot as the root cause of data breaches experienced by participating companies.
  • For the first time, research reveals that having business continuity management involved in the remediation of a breach can help reduce the cost.

An overview of the report can be found here.

However, the per record data breach cost within heavily regulated industries is substantially higher in most cases. This can include industries such as healthcare, transportation, education, energy, financial services, communications, pharmaceuticals and industrial companies.

What is a starting point to safeguard myself?

Performance-testing your security measures is a good starting point. Once you know where you stand, ask yourself these questions:

– Does my company have any difficulty maintaining compliance due to human error injected into access controls, users with too much access, or orphaned accounts? What regulations does my company have to follow? SOX? ISO 27001? Any others? Is my company able to match those requirements in a timely and cost-effective manner?

– Do information owners in my company always provide the correct level of access to resources for both internal and external users? How is the security managed and by whom? Am I in control of my users, their access rights, possible SoDs and such? Can I control and properly safeguard privileged accounts?

For evaluating your User and Access Management situation, go through the IAM Checklist for a more complete list of concerns.


User-friendliness is not a fad. And yes, it’s for enterprises too.

Lots of articles on user-friendliness highlight how Apple revolutionized user interface design with the iPhone, and now the design of almost everything in the world of software has improved. So for every conceivable task, you now have your choice of simple, sleek, easy-to-use apps to help you do it better, and maybe even make it a bit more fun. Great news for people looking for new indie music, or trying to share cute kitten pictures.

But one may wonder… is this whole ultra-user-friendly movement really relevant for enterprises?

I mean, with things like security, architecture, and vendor relationships to worry about, who has time to care about whether the software that employees use is user-friendly?

You might be surprised to learn just how much some enterprises have benefited from providing employees with more user-friendly products. And these companies invested in usability decades ago – yes, even long before the iPhone. Look at these two excerpts from case studies documented over 20 years ago:

As a result of usability improvements at AT&T, the company saved $2,500,000 in training expenses.
~ Bias & Mayhew, 1994

Design changes due to usability work at IBM resulted in an average reduction of 9.6 minutes per task, with projected internal savings at IBM of $6.8 Million in 1991 alone.
~ Karat 1990

Of course, nowadays it’s easy to find claims of huge benefits from an improved user experience.  For instance, in a much more recent article, Alan Langhals, a principal with Deloitte Consulting LLP made this comment:

The user experience drives adoption, and user adoption is an important first step toward realizing business value from big investments in enterprise systems.

Source: The Wall Street Journal

But what some don’t realize is that many large enterprises have understood the importance of usability for a very long time. To highlight that fact, I’ve included some older references related to the usability of enterprise software. For many of the companies mentioned, improved usability methods had a significant impact on their bottom line. Let’s look at 5 areas where usability still brings benefits for enterprise software, and we’ll see how support for these claims has existed for decades:


User satisfaction

If users don’t like your software, they’re going to find ways around using it. This can lead to cut corners, undermine the organization’s ability to adhere to efficient workflows, and inhibit timely completion of projects. The solution? Give your users software that they actually like using.

In a Gartner Group study, usability methods raised user satisfaction ratings for a system by 40%; when systems match user needs, satisfaction often improves dramatically.
~ Bias & Mayhew, 1994

Training costs

Complicated software takes longer to learn. Organizations that provide their users with easy-to-use software see dramatic reductions in training time. And less training time means lower training costs.

At one company, end-user training for a usability-engineered internal system was one hour compared to a full week of training for a similar system that had no usability work.
~ Bias & Mayhew, 1994

Support costs

Having software that users can figure out on their own means fewer calls to the help desk, which can also result in significant savings.

Design changes from one usability study at Ford Motor Company reduced the number of calls to the help line from an average of 3 calls to none, saving the company an estimated $100,000.
~ Kitsuse 1991

Employee satisfaction

No one wants an organization full of disgruntled employees. But when your employees have to trudge through frustrating, difficult-to-use software to carry out their daily tasks, it contributes to their dissatisfaction with their work. The impact of the problem sometimes shows up in unexpected ways, as the next case study demonstrates:

One airline’s IFE (In-flight Entertainment System) was so frustrating for the flight attendants to use that many of them were bidding to fly shorter, local routes to avoid having to learn and use the difficult systems. The time-honored airline route-bidding process is based on seniority. Those same long-distance routes have always been considered the most desirable. For flight attendants to bid for flights from Denver to Dallas just to avoid the IFE indicated a serious morale problem.
~ Cooper, 1999


Return on investment

All of the above adds up to a better return on investment. Check out this example:

On a system used by over 100,000 people, for a usability outlay of $68,000, (the company) recognized a benefit of $6,800,000 within the first year of the system’s implementation. This is a cost-benefit ratio of $1:$100.
~ Bias & Mayhew, 1994

Companies like AT&T, IBM, and Ford Motor Company have been wisely investing in usability for decades. Since all these reports were released, the standard for the design and ease-of-use of modern interfaces has increased significantly, creating even greater potential for increases in productivity and cost savings. Which means that sticking to that old, complicated, frustrating and difficult-to-use piece of enterprise software is probably costing you more than ever.

While design trends can change, the idea of creating user-friendly software is here to stay, and the benefits have been recognized for decades. Making people happy and saving money are two things that never really seem to go out of style.


Enhancing the benefits of implementing IDM365 – setting the right Success Criteria, right

Setting the right Criteria

Traditional project management often refers to the classic “Iron Triangle” – cost, time and quality – as the key drivers when setting project success criteria. This traditional method succeeds when the project’s requirements and user specifications can be defined upfront, an approach often called the “Waterfall” model. Most implementations follow a sequential implementation process, which has advantages for control and task planning.

When implementing an Identity Management and Governance solution, the key benefits are automation, audit, and control of the user identity lifecycle – primarily supporting the quality parameter while optimising the process.

When optimising administrative processes, other parameters define the success criteria. These are set by the contextual conditions of the project and should be assessed in order to maximise the potential benefits.

Thinking out of the box may imply a square

It can add value, prior to the implementation, to assess the organisation’s readiness for change, since involving your organisation is crucial. In order to validate whether the proposed workflows and pilot solution meet the quality-improvement goal of the project, it is essential to ensure that the proposed workflows are aligned to benefit the organisation rather than being dictated by the system.

Contextual parameters, such as the stakeholder community, are critical to maximise benefits after handover and to endorse ownership of the Audit and Governance features.

Besides the Iron Triangle, three more categories of success criteria are proposed for consideration:

Setting Success Criteria right

Setting the Success Criteria right means relating them to all the categories of success criteria, enabling a process that stays true to findings made during the improvement process and that has a directly measurable impact after project implementation. To maximise the benefits of the Audit and Governance features, setting the criteria right is important in order to measure the effect of your investment.


ActiveRoles Server

Not ready for a full-blown IDM solution, but still want to secure your AD and streamline provisioning and deprovisioning? Or does your current IDM solution only provide limited tools to manage your AD? Then ActiveRoles Server might be the solution for you.

ActiveRoles Server is part of the Dell One Identity (IDM) solution but is available as a stand-alone product. Developed by Quest Software, ActiveRoles is a well-known product used globally to provision, administer and secure more than 54 million AD user accounts, with deployments ranging in size from 250 to 800,000 users.

Through a set of tools, it allows you to efficiently manage users and groups while also overcoming some of Active Directory’s native limitations.



  • Identity and access lifecycle management.
  • Automatic User- and Group Provisioning and Deprovisioning.
  • Automated provisioning from an authoritative data source, such as an HR or ERP system, thereby automatically gaining control of user access.

Directory Management

  • Unified Active Directory and Active Directory Lightweight Directory Services (AD LDS formerly ADAM) Management.
  • Automated group management.
  • Interfaces for Day-to-Day administrators, Help Desk, and end user self-service.


  • Controlled Administration through Roles and Rules for a true least privilege model.
  • Approval Workflow for Change Control.
  • Centralized Auditing & Reporting.


  • Empower users with self-service capabilities.
  • Compliant & Secure Access Management through Group Membership Self-Service.


  • ADSI and PowerShell support for extensibility.
  • Customisable web interfaces.

ActiveRoles Server provides you with several tools to ensure that only approved IT personnel are granted access to Active Directory data, e.g. through:

  • Access Templates – used to grant administrative users access to AD objects (domains, OUs, containers or individual users) and to specify the level of access the user should have.
  • Policies – allow you to specify a set of rules that apply to the administrative users, for example making a range of attributes required during provisioning and even enforcing a specific syntax. Policies can also automatically generate values for attributes based on given information (e.g. logon name, email address etc.).
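As an illustration of what such a value-generating Policy rule might do – note this is a hypothetical sketch in Python, not ActiveRoles' own syntax, and the "first initial + surname" convention and domain are assumptions – a rule could derive the logon name and e-mail address from the user's given name and surname:

```python
def generate_attributes(given_name, surname, domain="example.com"):
    """Hypothetical provisioning rule: derive a logon name and mail
    address from the user's name (convention and domain are assumed)."""
    logon = (given_name[0] + surname).lower()
    return {
        "sAMAccountName": logon,
        "mail": f"{logon}@{domain}",
    }

# generate_attributes("John", "Doe")
# -> {"sAMAccountName": "jdoe", "mail": "jdoe@example.com"}
```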


Migrating Radius from Windows Server 2003 to 2012 R2

To follow up on my previous blog post about migrating DHCP from Windows 2003 to Windows 2012 – prompted by the end of life of Windows 2003 on July 14th, 2015 – I will continue down this track and provide you with a simple guide to migrating the Radius server from a source server running Windows 2003 to a target server running Windows 2012 R2.

Export Internet Authentication Service (Radius) from Windows 2003

  • Copy %windir%\syswow64\iasmigreader.exe from a server running Windows 2012
  • Copy iasmigreader.exe to the source server into C:\WINDOWS\system32
  • On the source server in command prompt, type iasmigreader.exe and then press ENTER. The migration tool will automatically export settings to a text file.
  • IAS settings are stored in the file ias.txt located in the %windir%\system32\ias directory on the source server.
  • Copy the ias.txt file to the target server (beware the file contains passwords)

On Target server Install NPS

  • In Server Manager, add new Role Services, select Network Policy Server, Install with default values

Import Settings to Target Server Running NPS

  • Open a command prompt and type: netsh nps import filename=”<path>\ias.txt”

Register the NPS server in the default domain using the netsh command

  • At the command prompt, type netsh ras add registeredserver and then press ENTER. Remember the account must, as a minimum, be a member of “Domain Admins”

Verify that Radius is active and supporting clients.
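One quick way to probe the new server is to send it a RADIUS request on the standard port (UDP 1812) and see whether it answers. A hedged sketch in Python – the server name is a placeholder, and a real client would also have to add attributes such as User-Password computed with the shared secret, per RFC 2865; this only builds the minimal Access-Request packet:

```python
import os
import struct

def build_access_request(username, identifier=1):
    """Build a minimal RADIUS Access-Request packet (code 1, RFC 2865).
    Layout: code (1 byte), identifier (1 byte), length (2 bytes,
    big-endian), request authenticator (16 random bytes), attributes."""
    # User-Name attribute: type 1, length = 2 + value length, value
    attrs = bytes([1, 2 + len(username)]) + username.encode()
    authenticator = os.urandom(16)
    length = 20 + len(attrs)  # 4-byte header + 16-byte authenticator + attrs
    header = struct.pack("!BBH", 1, identifier, length)
    return header + authenticator + attrs

pkt = build_access_request("testuser")
# To actually probe the server (name is a placeholder):
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(pkt, ("NPS01", 1812))
```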
