Knowledge Center

  • SQL Server 2008: Database Auditing Standards and Best Practices

    by Marigold Legaspi | May 13, 2015

    This will be the first in a series of blog posts over the next several months designed to touch on many of the topics covered in QuickStart’s SQL Server 2008 training courses; more specifically, database auditing, with guidance and discussion for SQL Server 2008 and SQL Server 2008 R2. If the next version of SQL Server (code named ‘Denali’) adds or changes anything here, I will cover that towards the end of the series. In this first blog, we’ll look at regulatory requirements, general organizational security practices, and auditing best practices for databases irrespective of the DBMS chosen. The second blog will continue looking at best practices, specifically what audit event data needs to be in the log and how to protect audit systems and data. In the third blog, I’ll discuss the available auditing methods in SQL Server 2008/R2 and the advantages and disadvantages of each one. In the fourth and fifth blogs we’ll do a deeper dive into two specific auditing mechanisms: SQL Server Audit (built-in fine-grained auditing introduced in SQL Server 2008) and SQL Server Event Notifications (introduced in SQL Server 2005).

    SQL Server 2008 Training: Auditing Standards and Requirements

    As the importance and pervasiveness of IT in today’s business has grown, so have government regulations at all levels, from local requirements all the way up to federal and even international regulations and guidelines. This blog is not meant to provide legal advice on the myriad of requirements that exist for data privacy and protection. For that, you will need qualified legal guidance.

    A quick search for regulatory audit requirements generates a long list of governmental mandates. The list includes well known laws of the last decade or so, like HIPAA and Sarbanes-Oxley, for the health care and financial services industries, respectively. Other regulations exist as well, such as GLBA, Basel II, 21 CFR Part 11, state data breach disclosure laws, FISMA, FERC, and NERC. (As an aside, let me introduce you to one of my favorite websites: http://www.acronymfinder.com/ – great for finding the definitions of hundreds of thousands of acronyms and abbreviations.) To help support the laws, various organizations and partnerships have generated guidance documents and policies, like COBIT and ITIL.

    SQL Server 2008 Training: Auditing Best Practices – What Data to Audit

    Let’s take a look at Sarbanes-Oxley (SOX): generally, to comply, you will need to be able to answer who changed or deleted data and who made changes to the database schema (e.g., dropped a table or a column), with special detailed emphasis on privileged users. More of a grey area is auditing unsuccessful attempts to perform those changes. You are generally not required to audit who accessed data (reads), just who made changes. SOX compliance is more about preventing manipulation of data than about privacy requirements.

    HIPAA on the other hand is definitely concerned with data privacy and will include requirements for who has access – all types of access, including read access – and who has used that access. The need to audit who reads data generally adds additional complexity to auditing solutions and will reduce the choices of available methods.

    Auditing best practices require us to analyze our data needs, the regulatory requirements, and any additional organizational auditing needs in order to create an auditing plan. As we configure auditing, we want to audit only what we need. It is often very easy to include far more information than we need in our audit reports. Too much information can prevent us from recognizing security and compliance violations, which can be more damaging than not having all the information asked for by an external audit. Most regulations state which types of information need to have access tracked.

    In my next blog we will continue your SQL Server 2008 training as we discuss specific pieces of information that all good auditing solutions include for every auditing event. We’ll also look at specific tools and mechanisms for enabling auditing compliance.

    Questions? Please share them below.  And of course, if you’re interested in learning more, look into one of our many SQL Server 2008 training courses.

    Thank you!

    Steven Allen, QuickStart Intelligence, System Engineer

     

  • Sequencing Identity Numbers in SQL Server 2012

    by Marigold Legaspi | May 13, 2015

    Identity columns are nothing new in T-SQL.  They are great when you want a unique number to be automatically assigned every time you insert a new record.  The trouble is that while they are unique within the table, they are probably the same between tables.  Every table with an identity column has a 1 for the first record, a 2 for the second, etc.  Of course you can set the seed and increment for the identity column to something other than (1,1), but you still have to manually track which table has which seed.  Trying to track that is destined for failure.  What we really need is a way to have automatically assigned numbers for new records, but ones which are guaranteed to be unique across any table that uses them.  Of course, you know I wouldn’t be writing this if there weren’t a solution.  In SQL Server 2012, Microsoft introduced something called the SEQUENCE object that does this very thing.

    ENTER THE SEQUENCE…

    There are two steps involved here.  First we create the SEQUENCE object, and then we use it when we insert records.  So, let’s say I am creating a database to track my cigars.  I have two tables, one tracks my Cubans and the other tracks all the others.  I need to keep my Cubans in a separate table so that if the Bureau of Alcohol, Tobacco, Firearms and Explosives ever investigates my system I can easily drop that table.  (Have you ever wondered what Alcohol and Tobacco have to do with Firearms and Explosives?  I guess that’s a topic for a different kind of blog.)   Anyway, I want to number my individual cigars and I want them to be kept in order, whichever table they are in.  Sounds like a job for a SEQUENCE object.  So, let’s create the tables:

    -- Assumes the CIGARS database and the stogies schema already exist;
    -- if not, create them first (CREATE DATABASE CIGARS; CREATE SCHEMA stogies;).
    USE CIGARS
    GO

    CREATE TABLE stogies.Legal (CigarID int, Brand nvarchar(50), Rating tinyint)
    CREATE TABLE stogies.Cuban (CigarID int, Brand nvarchar(50), Rating tinyint)
    GO

    Now, let’s create the SEQUENCE.  Of course you can open Management Studio and use the GUI, but that’s no fun; we want to write some code.

    CREATE SEQUENCE stogies.CountEm AS int
    START WITH 100 INCREMENT BY 10
    GO

    So now, all that’s left to do is insert some records.  To insert a value into my CigarID column, I will use the NEXT VALUE FOR function to generate a new unique number.  It will look something like this:

    INSERT stogies.Legal VALUES (NEXT VALUE FOR stogies.CountEm, 'MonteCristo', 7)
    INSERT stogies.Cuban VALUES (NEXT VALUE FOR stogies.CountEm, 'AVO', 10)
    INSERT stogies.Legal VALUES (NEXT VALUE FOR stogies.CountEm, 'La Gloria', 8)
    INSERT stogies.Cuban VALUES (NEXT VALUE FOR stogies.CountEm, 'Hoyo de Monterey', 9)

    Now, if I query the two tables, I get the following output:
    SELECT * FROM stogies.Legal

    CigarID  Brand            Rating
    -------  ---------------  ------
    100      MonteCristo      7
    120      La Gloria        8

    SELECT * FROM stogies.Cuban

    CigarID  Brand             Rating
    -------  ----------------  ------
    110      AVO               10
    130      Hoyo de Monterey  9

    Words of caution: according to BOL, “Sequence numbers are generated outside the scope of the current transaction. They are consumed whether the transaction using the sequence number is committed or rolled back.”  In other words, they are generated anytime you call the NEXT VALUE FOR function, and never re-used.  That is true even if you do this (which would return 140):

    SELECT NEXT VALUE FOR stogies.CountEm
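
    If you want to see that behavior for yourself, here is a minimal sketch (the brand and rating are just placeholder values) showing that a rolled-back insert still consumes a number:

    BEGIN TRAN
        INSERT stogies.Legal VALUES (NEXT VALUE FOR stogies.CountEm, 'Padron', 9)
    ROLLBACK TRAN
    -- The INSERT was rolled back, but the sequence number it consumed is gone for good;
    -- the next call simply returns the following value in the sequence.
    SELECT NEXT VALUE FOR stogies.CountEm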

    OK, there is one major error in this article, and it has nothing to do with coding or sequencing.  If you find the error, send me an email (jeff.rathjen@quickstart.com) and you will win a brownie point.  Or better yet, come see me in class sometime.

  • Top 5 Commands in PowerShell

    by Marigold Legaspi | May 13, 2015

    Which commands are you always going to need to use, and will use with every new command or module as you write PowerShell?

    1.  Get-Help.  You use this to read the help file for a cmdlet: Get-Help <cmd>.  So you see a script that uses Test-Connection, and you want to see what you can do with it: get-help test-connection.  If you want to see it in a separate window, in PowerShell v3 and higher: Get-Help Test-Connection -ShowWindow.  This opens full help in a separate window.  You can limit what you see to just examples by changing the settings (upper right hand corner of the window).  Get-Help has other parameters: -Examples, -Parameter <name>, -Full, and -Detailed, which result in help being shown to you in your PowerShell session.  Online help (use the -Online parameter) will open a browser window to the proper MSDN help page.
    2. Get-Help about*.   This is your tool for learning more about PowerShell concepts and constructs.  You can read the help file about_Command_Syntax, which tells you how to read help files, or about*operator*, which shows you there are about topics for type operators, comparison operators, and others.  One of the more inspirational help files in v3 is about_Workflows.
    3. Get-Command.  Get-Command allows you to find commands, using wildcards, or using the -Verb and -Noun parameters.  You could run: get-command -verb get -noun *serv*.  Or you could start investigating all the commands in a module you haven’t used before: get-command -module smbshare.  This command will tell you, in a tabular layout, the name of the command, the command type (function, alias, cmdlet), and the module it comes from.  You can also use get-help *servi* to see commands about services; this shows you the command name, the type, and a synopsis (the short description of what it does).
    4. Get-Member.   You pipe the output of a command to Get-Member to find out what properties you might want to use in a report.  Get-Member also shows you the .NET object type name, and the methods and events for that type of object.  You can use any property in a Select-Object or Format command following your initial command.  For example, get-service | get-member shows you the properties, events, and methods of a System.ServiceProcess.ServiceController object, including DependentServices, DisplayName and MachineName. You could decide to run the following: get-service -computername lon-dc1 | select machinename, displayname, dependentservices.
    5. Format-List -Property *.   If you see a long list of properties, are the values in a format you want to use?  Or are they in a format you will need to convert… like ticks instead of datetime? For example, you want logon information about users:

    You run: get-aduser -filter * -prop * | select-object -first 1 | format-list -property *  OR

    Using shortened parameters: get-aduser -filter * -prop * | select -first 1 | format-list *.

    You can see that there are lastlogon times that are datetime format, and some that are in ticks.  You can decide to select only the properties that are in datetime format.
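
    If you do need one of the tick-based properties, you can convert it yourself. Here is a minimal sketch, assuming the ActiveDirectory module is available (lastLogon is stored as a file-time value, while lastLogonDate is already a DateTime):

    Get-ADUser -Filter * -Properties lastLogon, lastLogonDate |
        Select-Object -First 5 -Property Name,
            @{Name = 'LastLogonConverted'; Expression = { [DateTime]::FromFileTime($_.lastLogon) }},
            lastLogonDate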

     

    In this blog, you’ve seen the 5 commands you will use again and again no matter how experienced you get with PowerShell.  These commands allow you to learn more, on your own, about PowerShell and the information you retrieve.

  • Microsoft Report Builder 3.0

    by Marigold Legaspi | May 13, 2015

    With the release of SQL Server 2008 R2, Microsoft is making Report Builder 3.0 available. It has been a long road from the original Report Builder 1.0, which was a little clunky, not very intuitive, and dependent on Report Models. Report Models are an abstraction of the data, and are generally designed by the BI Developer to expose data to the Business User for report creation.

    While there are some benefits to using Report Models (you can give fields and views logical names, group data logically, define relationships where none existed, and use them to enforce security, to name a few), it still falls on the BI Developer to create them.

    With the advent of Report Builder 2.0, we are no longer dependent on Report Models. Users can connect directly to the data source. The beauty of Report Builder 2.0 was that the clunky interface of Report Builder 1.0 was replaced by a much more intuitive, “Office-like” interface. This gives users familiar with the standard Office 2010 interface a head start in understanding the environment. The remaining challenge is making sure they understand the data they are working with, but there isn’t much we can do about that here.

    Report Builder 3.0 picks up where 2.0 leaves off. It keeps the same intuitive interface and the ability to connect directly to various data sources, but it brings in some new features and functionality that really elevate this product. While it is still designed to be accessible to the Business User (or Information Worker; Microsoft is very careful about not describing users of Report Builder as end users), the functionality available makes it a viable tool for experienced Report Developers.

    Let’s look at some of the new functionality in Report Builder 3.0.

    Report Parts

    Report parts are report items (charts, graphs, text boxes, etc.) that you store on a report server, or on a SharePoint site that is integrated with a report server. You can now reuse these report parts throughout your environment in multiple reports. This allows dev groups to utilize the individual strengths of the team members.

    When you add a report part to your report, it maintains a relationship to the original instance on the site or server. You can then modify the report part independent of the original report part object. After you have modified your version of the report part, you can save the modified report part back to the site or server, either adding a new report part, or over-writing the original.  When someone modifies the original report part on the site or server, you can choose to accept the modification to the report part in your report, or decline it.

    Shared Data Sets

    A shared dataset provides a way to share a query to help provide a consistent set of data for multiple reports. These datasets can connect to various external data sources and can include parameters. You are able to configure the dataset to generate cached results for a specific parameter combination, either on a schedule or on first use.

    As with Report Parts, the shared dataset maintains a relationship with the original instance of the data set on the site or server.

    Data Sources

    In addition to the wide variety of data sources available, Report Builder 3.0 has added a few new data sources.

    Report Builder 3.0 also exposes data from SQL Azure, allowing reports to be built from databases in the cloud. Data in SharePoint lists can be exposed using the SharePoint List Extension, and we can expose data from the Microsoft SQL Server Parallel Data Warehouse as well.

    Maps

    Report Builder 3.0 adds a Map Wizard, and a Map Layer Wizard as well. A map layer displays map elements based on spatial data from a map in the Map Gallery, from a SQL Server query that returns SQL Server spatial data, or from an Environmental Systems Research Institute, Inc. (ESRI) shapefile.

    This allows us to build reports that display data through a geographical interface. You can add interactive features such as tooltips and drillthrough links, or provide parameters that enable a user to interactively control the visibility of each layer.

    Report Builder 3.0 also comes with some additional BI type tools, such as sparklines, databars, and indicators. These tools allow us to convey a lot of information, in a relatively small space by displaying a graphical icon to represent large volumes of data.

    The more time you spend in Report Builder 3.0, the harder it is to look at it as a tool for non-technical users. It has become a very robust BI tool that every BI Developer should take advantage of.  While it is doubtful (to me) that the average Business User will take advantage of all the robust functionality of the product, it is still a benefit to expose this product to them as an ad-hoc report tool, even if they never take it out of second gear. I think Report Builder has made huge strides in becoming a reporting tool that is available to all, yet still has the functionality that only the BI Developers are going to fully appreciate.

    Larry Heppelmann, Senior Systems Engineer, QuickStart Intelligence

  • PowerShell Credentials and SecureStrings, Part II

    by Marigold Legaspi | May 13, 2015

    In my first blog post on using secure passwords in PowerShell scripts, I discussed methods in PowerShell to keep passwords and other secure strings secure in memory and protect against memory dump attacks and the like.  I showed how to use those secure strings to run commands with an alternate username and password when the commands ask for those directly, or by building and using PSCredential objects for commands that expect a full credential object.  At the end of the blog I showed how to convert a secure string to a regular encrypted string using the Windows Data Protection API (DPAPI).  While highly secure, that method of encrypting the string is limited to one user account on one computer by the way it uses the Windows DPAPI.

    In this blog post I will present some of the security considerations we want to be aware of when configuring security for scripts, especially automating their execution.

    Defense-in-Depth

    Let me first take a moment to comment on security in general.  There is a saying that “security by obscurity” is insecure, and there is sometimes confusion about what this means.  Every security system relies on obscurity in some way, even the very good (but not perfect!) Kerberos authentication system.  For example, to log on to a Windows domain, you have to supply a username and a password.  The whole system relies on that password being “obscure”, known only by the human responsible for the password.   You could also use a smart card and certificates as part of the authentication process.  This is generally more secure, as the obscurity of the authentication is even higher: you have to have the smart card (or a physical copy of it) with the appropriate certificates, plus a PIN.

    This obscurity is not what the “security by obscurity” refers to.  The bad form of obscurity is when the obscurity that is required is that of the implementation itself, and to break the security one only needs to know the details of how it was implemented.  For example:  Storing a password in plaintext somewhere.  Having your scripts read the password from the file is only secure as long as the location of the file is obscure (secret).

    No security system is perfect, and they all have their flaws and situations where they are effective and not effective.  Defense-in-depth is a design methodology that in principle attempts to increase security by using multiple security systems.  This prevents breaking the security by exploiting the weakness of just a single system.  In the previous blog post, I showed you how to encrypt the password in memory (SecureString) and write an encrypted form of the password to a file.  Knowing where the file is does not help, as the password is encrypted.  The encryption acts as a second security system.  If the encryption is good enough, it will be easier for an attacker to break the security by doing other things like cracking the password using various attacks, social engineering, physical espionage (e.g. hiding cameras to capture you typing the password), or rubber-hose cryptanalysis to force you to provide the password or keys.

    What are we trying to protect against?

    Let’s narrow the discussion back to securing strings.  You might have various things you need to protect as secure strings, but the most common and obvious are passwords.  These are passwords to accounts that have privileges to perform actions on our computer systems, actions we want to limit access to because those or similar actions, done at the wrong time or at all, could be damaging to the system or your organization.  A highly secure method would therefore always prompt a real human for a password.  Unfortunately, there are systems, interfaces, and situations where we want or need the computer to run a script automatically on a schedule or in response to some event.   The end result we are looking for, then, is to prevent both someone gaining access to the actual password and someone using an intermediary to execute their own scripts without even knowing the password.

    Protecting Stored Passwords

    We have seen how to protect passwords in memory using SecureStrings.  Using ConvertFrom-SecureString with no parameters uses the Windows DPAPI to secure a password for one account on one computer.  We have a few options for securing passwords for multiple people to use or on multiple computers.  They all have their drawbacks.  The other two options for ConvertFrom-SecureString either do not allow for automation or involve some loss of security protecting the password; still, that is better than nothing at all.  In the next blog post, I will go into detail on those options and other choices we can still use with just Microsoft tools.
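
    As a refresher, here is a minimal sketch of that default DPAPI option (the file path and account name are placeholders); the encrypted string can only be decrypted by the same user on the same computer:

    # Store: prompt once, encrypt with the current user's DPAPI key, and save to disk.
    Read-Host -Prompt 'Password' -AsSecureString |
        ConvertFrom-SecureString |
        Set-Content -Path 'C:\Scripts\svc-password.txt'

    # Retrieve later (same user, same computer) and rebuild a credential object.
    $secure = Get-Content -Path 'C:\Scripts\svc-password.txt' | ConvertTo-SecureString
    $cred = New-Object System.Management.Automation.PSCredential ('CONTOSO\svc-report', $secure)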

    Password security is not an easy thing, and trying to create your own system is likely to end in false security.  Thinking you have a secure solution, but not having one in reality, is the real risk.  Perhaps the best option is to look into 3rd party solutions such as password or key storage databases (for example, open source PowerShell functions from the Microsoft Script Center that use the Windows Credential Manager), script encryption programs like PShellExec, or custom schedulers with built-in support for this, such as JAMS Enterprise Scheduler.   Open source can be a real solution for this as well.  Cryptography and security are computing areas where open source and open methods and algorithms have been used for decades.  If everyone can see the methodology, and it has held up after years of being open, chances are the solution is good.

  • Microsoft Operations Framework (MOF) v4

    by Marigold Legaspi | May 13, 2015

    Wait, what’s a MOF?  Microsoft Operations Framework?  Why would I need that?

    In the 1980’s there was a movement to create standards for managing IT within the British government.  The result was the Information Technology Infrastructure Library (ITIL), initially published as a series of books between 1989 and 1996.  It documented IT best practices without regard to platform, nationality, industry or size of organization.  The sponsoring organization, the Central Computer and Telecommunications Agency (CCTA), enlisted experts from various telecom and computer companies to write and edit the guidance.  Microsoft was one of those companies.

    It was the intent of the ITIL sponsors that ITIL be adapted and adopted by software companies, industries, etc, since it deliberately left out platform or industry specific guidance.  Since the guidance did not cover Microsoft specific best practices, Microsoft realized that it needed to provide that guidance.   Microsoft began developing generic best practices guidance based on its own internal best practices, the best practices of its consulting arm, of its customers, and combined that with ITIL guidance.  The result was the Microsoft Operations Framework.  Both ITIL and MOF had as an underlying principle that IT should serve the business, and should provide services that the business needs.

    At that time, Microsoft had already published its own best practices software development guidance, called Microsoft Solutions Framework (MSF).  MSF had three key elements: it used a lifecycle approach, it embedded risk management into every phase, and it used a team model to assign responsibility, to hold members accountable, and to foster clear and open communication.  These three elements were key in the original development of MOF.

    As the industry continued to evolve, ITIL continued to revise its guidance.  ITIL v2 was released in 2000/2001, in the form of eight books; the red and blue books were the most frequently referred to: Service Support and Service Delivery. 

    MOF v3 was heavily dependent on ITIL v2.  It used the ITIL materials, and presented them in a lifecycle, four quadrant approach.  The main differences between ITIL and MOF at that time were the Management Reviews embedded in the lifecycle at the end of each quadrant, the addition of operations guidance for the Microsoft environment which ITIL had deliberately left out, and the Risk and Team models mentioned above.

    In May 2007, the new version of ITIL was released: version 3.  As Microsoft had continued to participate in the evolution and rewriting of the ITIL guidance, it was aware of the changes, and it worked on revising MOF.  However, for MOF v4, it decided to diverge from the ITIL approach; the new and current MOF was published in 2008. It is still based on a lifecycle approach which has three phases –Plan, Deliver, and Operate, and a foundational layer – Manage.   Moreover, MSF has been integrated into MOF – the MSF phases (Envision, plan, build, stabilize, and deploy) are now the elements of the MOF Deliver phase.

    The Manage layer has oversight into and control responsibilities throughout the phases.  This oversight is divided into three Service Management Functions (SMFs): Governance, Risk and Compliance; Change and Configuration; and Team.  Risk management is now part of the broader Manage layer, as part of the Governance, Risk and Compliance SMF.  There are still management reviews – six of them: Service Alignment, Portfolio, Project Plan Approved, Release Readiness, Operational Health, and Policy and Control.  For those of you familiar with the shortcomings of MOF v3, you can see that the Manage layer, with its focus on coordination and control through its SMFs, has added an element of strategic planning and oversight that was missing from earlier versions.  In addition, as with ITIL v3, which added a focus on the Service Portfolio, there are two new management reviews in the Plan phase – Service Alignment, which begins the process of getting IT on the same page as the business, and Portfolio, which documents the planned services, the offered services (the service catalog) and the retired services.  Many of the SMFs, such as Change and Configuration, Financial Management, Problem Management and Operations, are largely as they were in MOF v3.  However, the presentation of MOF has changed radically.  Guidance is now shorter, and is question based.  To find out more about MOF, go to the MOF home page at http://technet.microsoft.com/en-us/library/cc506049.aspx and take a look at the Reliability Workbooks, or the new IT Pro Quick Start Guide.

    I started by asking some questions: Why do you need MOF? If you are using ITIL, you may not need MOF.  However, if you are using Microsoft products, then the MOF question-based guidance poses key questions with regard to the various aspects of IT service management. MOF is the Microsoft variant of ITIL.  If you are not using ITIL or MOF, then you should evaluate ITIL and MOF and start examining how your own best practices resemble and differ from those that have been refined over the last decade. What do you think?

    Thank you!

    Saskia Schott, Systems Engineer, QuickStart Intelligence

  • SharePoint Designer Governance

    by Marigold Legaspi | May 13, 2015

    I hear these and many similar comments and questions all the time in my SharePoint Site Administrator/Power User classes:

    • “Oh, I can’t do that because my company won’t let us use SharePoint Designer.”
    • “I’d like to create a workflow, but no one is allowed to use SharePoint Designer.”
    • “How do I get my company to let me use SharePoint Designer?”

    These questions and comments all relate to SharePoint governance.  SharePoint governance is a very important consideration for any SharePoint deployment, and one on which, in my experience, a lot of organizations cut corners (or don’t even realize what they should be doing), to the detriment of their SharePoint deployment’s success.  SharePoint governance is a bigger subject than one blog post can cover.  Of all the subjects and classes we teach here at QuickStart (from Citrix, Cisco, and VMware, to Microsoft SQL Server, Windows Server, Exchange, etc.), there is only one product we have a dedicated governance class for: SharePoint.  I present this as evidence for the need for SharePoint governance, the lack of which has led some people to call SharePoint a “virus”.  (Go Google or Bing “SharePoint” and “virus” if you don’t believe me.)

    In this blog post, I’m going to discuss a specific SharePoint governance issue: SharePoint Designer.  What SharePoint Designer is, why you should care, who should have and use it (and for what), and what can and should be in a governance plan to make the best use of SharePoint Designer are the questions answered in this post.

    What is SharePoint Designer?

    SharePoint Designer is a tool that developed out of Microsoft FrontPage (back when that was still an active product) and that Microsoft eventually made free to anyone using SharePoint.  Due to the way that SharePoint stores files in a SQL Server database, most web development tools cannot work with SharePoint content, so a SharePoint-aware design tool is necessary.  Over time, Microsoft has added more and more functions to SharePoint Designer, so it is no longer just a web customization and design tool.  SharePoint Designer 2010 allows site administrators to, among other things:

    • Customize SharePoint pages, including applying cascading style sheets, adding web part zones, and removing or changing formatting
    • Customize and apply custom master pages to change page layouts
    • Back up and restore sites (limited; should not be used as a true backup solution)
    • Add DataView web parts to SharePoint pages (used to display external data content)
    • Design SharePoint Workflows
    • Create sites, lists, libraries and change site structure
    • Configure security settings
    • Create External Content Types using Business Connectivity Services

    Many of the new features added to SharePoint Designer increase the potential audience for the tool.  It is a very useful tool that can make administration of sites (security, creating and managing lists and libraries) faster and easier.

    Why do I care about SharePoint Designer governance?

    SharePoint Designer is, however, a tool that can be dangerous if users do not know what they are doing.  The experience of SharePoint 2007 and SharePoint Designer 2007 showed that users can make sites difficult to use, waste time changing colors and look and feel, create inconsistent navigation, or even completely break pages and sites so that they will not load.  This resulted in a lot of trouble for other users, IT, and help desks, and contributed to the resistance and negative opinions of SharePoint some users developed.  Many organizations have since completely banned SharePoint Designer or restricted its use to IT only.  The risk of this approach is losing the benefits that SharePoint Designer can provide to the organization.

    Who should use SharePoint Designer and what should they be using it for?

    Users who administer sites (create lists and libraries, customize web parts on pages, set site security) or business users who already work with data or set business process rules could benefit from SharePoint Designer.  The biggest risks with SharePoint Designer are those related to customizing pages, not site administration, incorporating external data, or creating and using lists and libraries.  Not allowing users to do these things with SharePoint Designer risks having SharePoint be treated as a web-based file server and project tracker that takes a lot of work in the Web UI to make useful.  By letting users build simple team or department workflows, display and incorporate data from external business systems, administer security, and customize web parts quickly, they can build integrated solutions that users want to use, rather than SharePoint sites that are too “SharePointy”, as I’ve heard them described: sites restricted to just storing list items and documents.

    How can I gain the benefits and yet govern SharePoint Designer usage to minimize risk?

    The discussion invariably leads to how to govern SharePoint Designer but still reap the benefits of using it.  Some organizations create teams of SharePoint Designer experts who handle requests from users for changes and features.  Some deploy SharePoint Designer to the site designers and site administrators in the company.  Banning SharePoint Designer is easy and safe, and some administrators have been burned for allowing it.

    Fortunately, Microsoft saw the problems (and likely experienced some of them first hand in their own deployments) and provided some governance tools for us.  At the site collection or web application level, administrators can restrict what is allowed with SharePoint Designer.  You can prevent users from customizing pages from the site definition (but still allow web part customization), prevent master page customization, and prevent users from changing site structure (but still allow creation, modification, and deletion of lists and libraries).  This allows an organization to deploy SharePoint Designer to approved users, but limit those users to the most useful and less dangerous capabilities (administering security, managing lists and libraries, creating workflows, BCS External Content Types, advanced formatting of lists, etc.).

    No governance can work with just computer-based tools.  Governance starts and ends with good, logical, and well-designed governance policies in place and communicated clearly to the people they govern.  A good governance policy for SharePoint Designer is one that identifies who can benefit from SharePoint Designer, establishes rules and guidelines for using it, and incorporates a training aspect into the policies.  You wouldn’t let an employee drive a corporate vehicle without some proof they can operate it successfully, so don’t give users SharePoint Designer without some procedure to validate that they know your policies and the software.  I’ve seen organizations build a computer-based training program with videos and examples, then use SharePoint surveys to quiz the users, with a final test on a sample site with tasks for the users to perform.  An approver then reviews the site and approves (possibly through a SharePoint workflow) the user to have SharePoint Designer installed for them.  Third party training certifications have also been used in the past to meet this requirement.  Don’t forget that governance policies will vary from site to site.  For a site like an employee HR site used by everyone in the company, you may not let anyone outside IT use SharePoint Designer, but on a project team site it may be highly appropriate for a project manager to use it.

    If you need more information about SharePoint governance or SharePoint Designer, or want to train your users to use SharePoint Designer, QuickStart has extensive two-day classes for both subjects.

    Thank you!

    Steven Randell Allen, Systems Engineer, QuickStart Intelligence

  • Top 5 PowerShell scripting sources

    by Marigold Legaspi | May 13, 2015

    In a recent class, I pointed out to students some of my favorite scripting sources, and found that some students had found one or two, but not all my favorites. So, since PowerShell is all about finding the right script, here are my top five websites for PowerShell scripts. And, by the way, the right script might be the one that does exactly what you need, or it could be the one that does something very close to what you need, and you can modify.
    These sites are not listed in ranked order (as if the first were the _very_ best); instead, I’ll put them down as they come to me.

    1. PowerShell.com This web site not only has scripts, blogs, snippets, but also one of my favorite ways of continuing to learn PowerShell – the PowerShell script of the day. Tobias Weltner has been working with PowerShell for a long time, and this site is rich in resources, including some free pdf books on PowerShell administration and remoting.
    2. Poshcode.org This web site is great in that there are thousands of scripts and people keep on adding scripts and improving them. Joel Bennett and Lee Holmes have both contributed lots of content on this site, to only name two well-known PowerShell gurus.
    3. Codeplex.com This web site has a great SharePoint install script written in PowerShell (AutoSPInstaller.codeplex.com) , that will install everything your farm needs, (except PowerPivot) by using the information you configure in an xml file. Since correctly installing all the components of a SharePoint farm is a serious challenge, this contribution alone makes codeplex stand out. But, wait, it also has sub sites for SQL PowerShell community extensions, and much, much more. HOWEVER, I have found that going to codeplex (www.codeplex.com) and using their ‘Search’ box doesn’t result in as many, or as relevant entries, as using my favorite search engine with the keywords: PowerShell codeplex.
    4. The Microsoft scripting center. Long a resource for administrators, this site not only has PowerShell scripts, but also VB scripts.
    5. Your favorite search engine. Since so far I’ve been catching and preparing your fish for you, now I want to teach you to fish: When using your favorite search engine, type in PowerShell, then the Microsoft product you are interested in, if any, and then the rest of the query you have. For example: PowerShell monitor and send to csv file.

    I hope you find what you’re looking for. You’ll usually save time if you spend a little more time searching for existing scripts, than if you try to build it from scratch. Even if you just break your task into pieces and look for scripts that you can use and patch together, you will often save time.

  • 3 ways to Monitor with PowerShell

    by Marigold Legaspi | May 13, 2015

    A student in a recent class asked for a script that could take every command a user typed into PowerShell and send it to a csv file for monitoring purposes. He didn’t want to capture history after the fact with a ‘get-history’ command, since the user could easily have run ‘clear-history’, removing the commands they had run to that point. That prompted me to think about the choices you have with PowerShell. His chosen alternative was to send events to the PowerShell event log; however, when monitoring anything, you could also decide to send data to a SQL database. My thanks to David W for his question, and his eventual script, which you can see below.

    With PowerShell, you can easily export your data to a csv file. For ease of demonstration, I’ll use a simple command:

    Get-wmiobject win32_bios -computername (get-content servers.txt) | export-csv serverbios.csv. That’s not so hard. But, how do you get PowerShell to take each command, as you type it, and send the command to… well, wherever you choose?

    We went to the web and found a script on www.powershell.com that changed the function, ‘prompt’. http://powershell.com/cs/blogs/tips/archive/2012/11/26/logging-input-commands.aspx. So now, each time the user hit enter, and PowerShell ran the command and offered the next prompt, the command was sent to a .txt file. Simply modifying the script so that it was sent to a csv file was easy. Here is what the original looked like:

    function prompt
    {
      'PS> '
      $Host.UI.RawUI.WindowTitle = Get-Location
      if ($global:CmdLogFile) {
          Get-History -Count 1 |
            Select-Object -ExpandProperty CommandLine |
            Out-File $global:CmdLogFile -Append
      }
    }

    To enable logging, set $global:CmdLogFile to a path, like this:

    $global:CmdLogFile = "$env:temp\logfile.txt"

    The problem with this script is that it doesn’t record the server name, username, date and time. So, the script was modified to log to the Windows PowerShell log. As he pointed out: it simply logs anything the user has typed in. Each time you hit Enter in PS, it records a new row in the event log. The nice thing is that it will capture whatever the user typed, whether it be a .ps1 name, a cmdlet name, or a .NET function call. By using an event, we don’t need to put the server name, user name, or date/time in the event description (we get that for free).

    function prompt
    {
      "PS $($executionContext.SessionState.Path.CurrentLocation)$('>' *($nestedPromptLevel + 1)) "
      $InstanceID = $(Get-Host).InstanceId
      $xml = "<PSData>`n"
      $xml += "<cmd>$(Get-History -Count 1 | Select-Object -ExpandProperty CommandLine)</cmd>`n"
      $xml += "<InstanceID>$InstanceID</InstanceID>`n"
      $xml += "</PSData>"
      Write-EventLog -logname 'Windows PowerShell' -source PowerShell -eventID 9000 -entrytype   Information -message $xml -Category 8
    }

    # CSV alternative: build an object with the command and context details and append it to a CSV file (the path is illustrative).
    $date = Get-Date
    $auditLogEntry = New-Object PSObject
    $auditLogEntry | Add-Member NoteProperty CommandLine (Get-History -Count 1 | Select-Object -ExpandProperty CommandLine)
    $auditLogEntry | Add-Member NoteProperty DateTime $date
    $auditLogEntry | Add-Member NoteProperty ComputerName $env:COMPUTERNAME
    $auditLogEntry | Add-Member NoteProperty UserName $env:USERNAME
    $auditLogEntry | Export-Csv 'C:\temp\test.csv' -NoTypeInformation -Append

    So, logging each command to the event log is pretty cool, right? How could that possibly be improved upon, you wonder? Ah, well, what about adding the ability to also log the contents of any PowerShell script that was called? So, here is the updated version:

    function prompt
    {
      "PS $($executionContext.SessionState.Path.CurrentLocation)$('>' * ($nestedPromptLevel + 1)) "
      $commandtext = Get-History -Count 1 | Select-Object -ExpandProperty CommandLine
      if ($commandtext.Length -gt 0)
      {
          $InstanceID = $(Get-Host).InstanceId
          $data = "Command Text = $commandtext`n"
          $data += "Host ID = $InstanceID`n"
          # If a .ps1 script was called, capture that script's contents as well
          if ($commandtext.ToLower().Contains('.ps1'))
          {
              $ps1FileName = $commandtext.Substring(0, $commandtext.IndexOf('.ps1')) + '.ps1'
              $result = Test-Path $ps1FileName -ErrorAction SilentlyContinue
              if ($result)
              {
                  $scriptContents = Get-Content $ps1FileName -ErrorAction SilentlyContinue
                  $data += "Script Contents = $scriptContents`n"
              }
          }
          Write-EventLog -logname 'Windows PowerShell' -source PowerShell -eventID 9000 -entrytype Information -message $data -Category 8
      }
    }

    (Screenshot: the logged command’s properties in the Windows PowerShell event log)
    Now that’s using the power of PowerShell!
    As an alternative, if you aren’t capturing PowerShell commands, but want to monitor, I’d like you to take a look at the blogs of Laerte Junior, a SQL Server MVP. He has blogged on Creating a monitoring server for SQL Server with PowerShell, https://www.simple-talk.com/sql/database-administration/create-a-monitoring-server-for-sql-server-with-powershell/, as well as Gathering Perfmon data with PowerShell, https://www.simple-talk.com/sql/database-administration/gathering-perfmon-data-with-powershell/.
    Check out his other blogs at: https://www.simple-talk.com/author/laerte-junior/
    Here’s to your successful monitoring! You can monitor to a csv or text file, to a Windows Event Log, or to SQL Server.

  • Auditing Tools in SQL Server 2008 and SQL Server 2008 R2

    by Marigold Legaspi | May 13, 2015
    This is the third article in a series discussing database auditing. The first article discussed the need to audit arising from various governmental mandates and the best practice of auditing just the data you need to meet requirements.  The second article discussed audit log data best practices, both what event data to include in your audit logs and the need to protect your audit log systems and data from tampering.  This article will look at the various tools SQL Server 2008 provides to administrators and developers to enable auditing compliance, and the pros and cons of each.

    SQL Server contains many tools that can be used for various types and levels of auditing:

     

    DML Triggers (SQL 2000 and earlier)

    Advantages:
    • Available in all editions and versions of SQL Server
    • Allow for fine-grained auditing
    • Can use SQL Server Reporting Services to easily report

    Disadvantages:
    • High overhead; auditing is part of the transaction
    • Implementation cost is high; you need to implement a separate trigger for each table
    • Can only be used for INSERTs, UPDATEs, and DELETEs
    • Low security: can be disabled by db_owner to bypass auditing

    SQL Server Profiler (SQL 2000 and earlier)

    Advantages:
    • Available in Standard Edition
    • Allows for fine-grained auditing
    • Can audit all types of events

    Disadvantages:
    • Low-to-medium performance overhead with server-processed events (recommended for auditing)
    • Need to run a tool outside of SQL Server
    • Risk of database activity occurring without auditing running
    • Difficulty reporting depending on audit log location (SQL table vs. trace file)

    SQL Server Server-Side Tracing (SQL 2000 and earlier)

    Advantages:
    • Available in Standard Edition
    • Allows for fine-grained auditing
    • Can audit all types of events
    • Much more difficult to disable than DML triggers or SQL Server Profiler, and disabling can be audited

    Disadvantages:
    • Low-to-medium performance overhead
    • Implementation cost is high to set up, configure, and make sure it is running all the time
    • Difficulty reporting depending on audit log location (SQL table vs. trace file)

    C2 Audit Mode (SQL 2000)

    Advantages:
    • Easy to configure
    • Meets the federal C2-level security audit specification
    • Lots of detail in the audit log
    • If auditing cannot occur, the server will stop
    • Can log reads as well as changes

    Disadvantages:
    • All or nothing; logs every activity on the server
    • Can easily become too much data
    • Extra work required to report from the data
    • Many extended stored procedures do not work with C2 auditing enabled
    • Includes more than just user access data

    DDL Triggers (SQL 2005)

    Advantages:
    • Can audit for schema changes
    • Fairly easy to implement, especially with event groups
    • Disabling is audited

    Disadvantages:
    • High overhead; auditing is part of the transaction
    • Can only be used for DDL statements
    • Can be disabled without too much trouble

    Event Notifications (SQL 2005; SP1 for a security fix)

    Advantages:
    • Allows for fine-grained auditing
    • Low overhead; auditing is asynchronous but highly reliable, using the SQL Server Service Broker
    • Most secure auditing mechanism possible if configured correctly, and disabling is logged
    • Supports all editions of SQL 2005 and SQL 2008/R2, only requiring one Standard or Enterprise edition instance for the audit log server
    • Fully customizable reporting possible using SSRS

    Disadvantages:
    • Can be difficult to set up, especially in the more secure scenarios

    SQL Server Audit (SQL 2008)

    Advantages:
    • Easy to configure
    • Very secure
    • Choice of log locations (a file, which is easily queried and/or placed in a SQL table, or either the Windows Application log or the Security log)
    • Can integrate almost seamlessly with other auditing measurement and reporting systems that collect from the Windows Security log
    • Low overhead

    Disadvantages:
    • Possibly not as secure as some Event Notification setups
    • Requires SQL Server 2008/R2 Enterprise edition (or greater); won’t work with Standard or lower editions, or with SQL Server 2005
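
    To make one of these options concrete, here is a minimal sketch of the DDL trigger approach, using a database-level event group to capture schema changes into an audit table (the table and trigger names are illustrative):

    -- Audit table to hold the captured DDL events
    CREATE TABLE dbo.DdlAuditLog
    (
        EventTime datetime2 NOT NULL DEFAULT SYSDATETIME(),
        LoginName sysname   NOT NULL DEFAULT ORIGINAL_LOGIN(),
        EventData xml       NOT NULL
    )
    GO

    -- DDL trigger bound to an event group, so every database-level DDL statement is logged
    CREATE TRIGGER trgAuditDdl ON DATABASE
    FOR DDL_DATABASE_LEVEL_EVENTS
    AS
        INSERT dbo.DdlAuditLog (EventData) VALUES (EVENTDATA())
    GO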

    I hope you found this article looking at the available auditing tools in SQL Server 2008 interesting and useful.  In the next articles in this series, we will look at the two most powerful, secure, and flexible auditing methods in SQL Server 2008: SQL Server Audit and Event Notifications.  Following those, we will look at any changes to auditing that SQL Server 2012 provides.

    Has this helped you and what do you think?

  • PowerShell Credentials and SecureStrings, Part III

    by Marigold Legaspi | May 13, 2015

    In my first blog post on using secure passwords in PowerShell scripts, I explained how the .NET Framework uses Strings and a newer object type, SecureStrings.  SecureStrings have several features that protect against attacks that read string values from memory, not the least of which is that the string is encrypted in memory from the start.  I then introduced the Get-Credential cmdlet, which will prompt for a username and password and store that password in a SecureString property, with the property named “Password”.  An alternative discussed there, when only a SecureString is needed and not a full credential, was the Read-Host cmdlet with the -AsSecureString parameter.

    These two cmdlets, Get-Credential and Read-Host, work great when running scripts interactively, as they prompt the user running the script for the necessary values, but they will not work if you want to run the script as a scheduled process.  In the case of scheduling a script, the password will need to be stored with the script.  The ConvertFrom-SecureString cmdlet allows you to take a SecureString object and convert it to an encrypted string that can be written to a file using Set-Content.  The ConvertTo-SecureString cmdlet can then be used to convert an encrypted string, usually read from a file, back into a SecureString to be used with PSCredential objects or cmdlets that take a SecureString directly.

    ConvertFrom-SecureString offers three ways to take a SecureString and convert it to a normal encrypted String that can be written to a file.  I previously discussed the default behavior of the cmdlet, which encrypts the SecureString using the Windows Data Protection cryptographic API.  While a very secure option, it is limited to the user who ran the cmdlet, on the same computer where it was encrypted.  In this post, I will discuss options for securing passwords in scripts/files that do not have that limitation and still allow you to run your PowerShell scripts on an automated schedule.  This will include the other two encryption options of ConvertFrom-SecureString.

    -Key and -SecureKey Parameters

    ConvertFrom-SecureString and ConvertTo-SecureString have two parameters that you can use to change the default behavior.  These parameters are -SecureKey and -Key.  The -SecureKey parameter takes a SecureString object, and the -Key parameter takes a byte array (Byte[]).

    You use -SecureKey with a SecureString object in memory to encrypt and decrypt the other SecureString.  This does not really help us in our goal of storing secure passwords and accessing them through automation.  You would need a SecureString object built, either by decrypting another string using another option, or by having someone type the SecureString interactively.   The first does not solve the problem of securing it; the second does not solve the automation problem.

    The -Key parameter allows you to use a 128-bit (16-byte), 192-bit (24-byte), or 256-bit (32-byte) key and uses the Advanced Encryption Standard (AES) cipher, also known as the Rijndael cipher, to encrypt/decrypt the SecureString.  It is symmetric encryption, so you need to provide the same key for decryption as you did for encryption.  One way to do this is to embed the key in each script file.  Not surprisingly, this is not recommended and results in security not much better than just storing the password in plain text in the script.  It also makes it difficult to change the key or password, which you should do frequently when using this method.  A better choice is to store the key in a separate file from the script and encrypted password.

    Use NTFS permissions to secure the file containing the key so that only the users who should use the key can access it.  Even better would be to use Encrypting File System to encrypt the key file and share it only with the user accounts that should be able to access it.  You can then create a new file with a new key whenever you like and use the key to re-encrypt the password.  Of course, re-encrypt the password to a new password whenever the password changes.  To increase the security (a little, as it’s a “security by obscurity” trick), store the password file and key file in separate locations.  Here is what that code may look like (notice at the end we get a SecureString back):

    (Screenshot: capturing a password as a SecureString and encrypting it with a key)
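
    For reference, a minimal sketch along those lines (the key and password file paths are placeholders, and the key file should itself be protected as described above):

    # One-time setup: generate a random 256-bit (32-byte) AES key and store it in a protected file.
    $key = New-Object byte[] 32
    [System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($key)
    Set-Content -Path '\\secureshare\keys\script.key' -Value $key -Encoding Byte

    # Encrypt the password with that key and save it to a separate location.
    Read-Host -Prompt 'Password' -AsSecureString |
        ConvertFrom-SecureString -Key (Get-Content '\\secureshare\keys\script.key' -Encoding Byte) |
        Set-Content -Path 'C:\Scripts\svc-password.txt'

    # Later, in the scheduled script: read both files and rebuild the SecureString.
    $key    = Get-Content '\\secureshare\keys\script.key' -Encoding Byte
    $secure = Get-Content 'C:\Scripts\svc-password.txt' | ConvertTo-SecureString -Key $key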

    You should always get a SecureString back, but if the wrong key was used, then the password will be wrong and authentication in your scripts will not work.  This is exactly what you want.

    Other Options to Protect Passwords

    Of course, this is less secure than still other solutions.   As mentioned in the 2nd blog post on this subject, 3rd party solutions can often be easier and more secure to implement.  Another option is to use Public Key Infrastructure (PKI) to encrypt the key file.  This method uses Public Keys and certificates to encrypt the key file.  You would encrypt a new copy of the key file for each user that would need to access the key to decrypt the password.  To use this method you would use objects in the .NET Framework and trusted certificates.  This is a more complicated solution requiring a built out PKI infrastructure.  I may present that method in another blog post.

    Additional Considerations

    I would encourage you to use the AllSigned PowerShell script execution policy and digitally sign all your scripts.  While this requires additional configuration and obtaining a Class 3 Microsoft Authenticode Code Signing certificate, it adds an important additional security mechanism.  Your password security does not matter much if someone can simply alter the scripts that are configured to decrypt the password and run.  They could put in whatever code they wanted, and when your script executes as a job that you have already scheduled, they have accomplished what they want.  If you sign your scripts, PowerShell validates the signature.  Validating the digital signature verifies that the file has not changed and that it was signed by a trusted publisher.  If the file has been changed and not re-signed, then PowerShell will find the script invalid and will not run it.  Denial-of-service attack?  Sure, but it is a lot better than running whatever code the attacker wants.
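
    A minimal sketch of what that configuration might look like, assuming a code-signing certificate is already installed in the current user's certificate store (the script path is a placeholder):

    # Require that all scripts be signed by a trusted publisher.
    Set-ExecutionPolicy AllSigned -Scope LocalMachine

    # Sign a script with a code-signing certificate from the current user's store.
    $cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
    Set-AuthenticodeSignature -FilePath 'C:\Scripts\Get-Report.ps1' -Certificate $cert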

    I hope you have found this series of blog posts useful and interesting.

  • PowerShell Credentials and SecureStrings

    by Marigold Legaspi | May 13, 2015

    I frequently get asked questions in my PowerShell classes about providing credentials to commands and scripts, including storing passwords in scripts.  In this blog post I’ll discuss how the .NET Framework handles strings and why that behavior is undesirable for working with passwords and other secure strings of data.  I will also discuss methods of providing passwords, credentials, and other secure strings into our commands and scripts interactively.  At the end I will introduce a method of securely saving credentials so that a script can be executed without prompting for passwords, although the script must be executed by the user who entered the credentials.  In a follow up post, I’ll discuss options when you need multiple people to execute scripts with saved credentials.

    VBScript Security

    With VBScript, or any other Windows Scripting Host language, the most common method of saving credentials was to save the username and password in clear text in the script file.  The danger of this should be obvious: any passwords, usually associated with accounts with elevated or administrator permissions, can easily be read by anyone (hackers included).  Some administrators used Windows Encrypting File System (EFS) to encrypt the file.  That can work well, and access to the file can even be shared to individuals with private key recovery, if the Active Directory environment was configured correctly (Certificate Services + Group Policy Auto-enrollment + Group Policy Credential Roaming + Private Key Archival).  EFS is certainly a viable option for PowerShell, and if the environment is setup, then it’s an excellent option.  If not, PowerShell has additional options.

    PowerShell and .NET Framework String and SecureStrings

    Strings in the .NET Framework (and PowerShell is a .NET Framework scripting language) have two important properties that relate to password security: one, strings are immutable; two, strings cannot be disposed of by the “user”.  Immutable means that the string, once created, is fixed.  PowerShell syntax can make it seem like a string has changed, but the .NET Framework actually creates a new string object with the “modified” value and points your variable at the new object.  The old string object that, say, contains your password in clear text remains in memory until the .NET garbage collector cleans it up (disposes of it).  The second point means that you cannot dispose of the string yourself even if you wanted to.  With your string in memory, any memory dump, whether accidental or malicious, will contain that password in clear text.

    To help in these situations, in the 2.0 version of the .NET Framework, Microsoft introduced the SecureString class.  SecureStrings have several nice advantages for use with passwords:

    1. They are not immutable, so they can be changed in memory.
    2. They implement a Dispose method, so you can manually dispose of them in a script.
    3. They do not store the string in clear text in memory, so a memory dump will not gain an attacker anything useful.
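
    As a quick illustration of those points (the password value here is just an example):

    # Build a SecureString one character at a time, then dispose of it when finished.
    $secure = New-Object System.Security.SecureString
    'P@ss!'.ToCharArray() | ForEach-Object { $secure.AppendChar($_) }
    $secure.MakeReadOnly()   # optional: prevent further changes
    # ... use $secure with a cmdlet that accepts a SecureString ...
    $secure.Dispose()        # release the protected memory explicitly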

    PowerShell Credentials

    Many PowerShell cmdlets have a -Credential parameter that accepts a PowerShell credential (PSCredential) object.  The primary method of building a credential in PowerShell is the Get-Credential cmdlet, which presents a graphical prompt asking the user for the credentials.  The object returned by this cmdlet contains a Password property that itself returns a SecureString, so the password supplied by the user is never stored in clear text in memory.  This object can then be passed to any cmdlet that has such a -Credential parameter.  However, not every cmdlet that allows alternate credentials accepts a PSCredential object.  Those cmdlets usually build the credential for us from two separate parameters, a username and a password, and the password needs to be supplied as a SecureString object.  This is true even if you are willing to store your passwords in clear text in your script files (and hopefully use something like EFS to encrypt the files).
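
    For example (the computer name below is just a placeholder):

    # Prompt for credentials and pass them to a cmdlet that accepts a credential object.
    $cred = Get-Credential
    Get-WmiObject Win32_OperatingSystem -ComputerName SERVER01 -Credential $cred
    $cred.Password   # the Password property is a System.Security.SecureString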

    Providing a SecureString to PowerShell

    PowerShell contains several cmdlets that let you create or manipulate SecureString objects.  The easiest way to create a SecureString object is to use the Read-Host cmdlet with the optional -AsSecureString parameter.  Using this parameter directs PowerShell to mask the string typed at the prompt and return it as a SecureString.  You can now use this object with any cmdlet that needs a SecureString object.

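    A minimal example, prompting for a password and checking what comes back:

    $securePassword = Read-Host 'Enter Password' -AsSecureString
    $securePassword.GetType().FullName   # System.Security.SecureString; the typed text is never displayed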

    That cmdlet is great and works in many situations, but not where you need to run a script automatically through some kind of scheduler, like Windows Task Scheduler or SQL Server Agent jobs.

    Creating a SecureString from Plaintext

    If you want to save the password in the script in plain text so that the script does not prompt when it is executed, you can use the ConvertTo-SecureString cmdlet with the optional -AsPlainText parameter.  Doing so takes a String object and makes a copy of it as a SecureString.  This still leaves a String object with your password in memory, making the system vulnerable to a memory dump attack, so this technique should only be used on secure, trusted systems with a high level of physical and network security in place.  Even if you encrypt the file, there is still the risk of memory access to the password when the script runs.  To ensure that administrators are aware of the risk, Microsoft requires an additional parameter whenever -AsPlainText is used: -Force.

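    A quick sketch of what that looks like (the password value is just an example):

    $secure = ConvertTo-SecureString 'P@ssw0rd123' -AsPlainText -Force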

    Creating a SecureString from an Encrypted String

    What is the recommended method of storing passwords and using them in PowerShell?  Microsoft would like us to store our SecureString passwords as encrypted strings in files, and then decrypt them back into SecureStrings when we need them.  To do this, use the ConvertFrom-SecureString cmdlet on the output of the Read-Host “Enter Password” -AsSecureString command.  If you use ConvertFrom-SecureString with no parameters, PowerShell uses the Windows Data Protection API (DPAPI) to encrypt the password with a key tied to the current user, and you can then save the result to a file.  Unfortunately, the way that PowerShell uses the DPAPI means that only that user on that computer can decrypt the password in the file, so make sure you are logged on as the user that will run the automated job when you encrypt the password.  Use the Set-Content cmdlet to write the encrypted password string to a file.

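    Something along these lines (the file path is an example):

    Read-Host 'Enter Password' -AsSecureString |
        ConvertFrom-SecureString |
        Set-Content 'C:\Secure\sqljob.pwd'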

    To rebuild the SecureString for use with cmdlets that need one, use the ConvertTo-SecureString cmdlet on the content read from the password file.  Again, this only works if PowerShell is running under the user account that was used to encrypt the password.

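    For example, assuming the same example file path as above:

    $secure = Get-Content 'C:\Secure\sqljob.pwd' | ConvertTo-SecureString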

    For cmdlets that need a whole credential object, it’s only one more command, using the New-Object cmdlet to create a new credential object.

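    A short sketch, with a placeholder account name:

    $cred = New-Object System.Management.Automation.PSCredential -ArgumentList 'DOMAIN\svc_sqljobs', $secure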

    What about encrypting the password so that multiple users can access it to execute scripts?  I’ll save that situation and other security considerations for later blog posts.  Hopefully you have found this useful for working with passwords and SecureString objects.  The same techniques can be used for any string you need to secure, not just passwords.

  • Sequencing Identity Numbers in SQL Server 2012

    by QuickStart Intelligence | Apr 29, 2015

    Identity columns are nothing new in T-SQL.  They are great when you want a unique number to be automatically assigned every time you insert a new record.  The trouble is that while they are unique within the table, they are probably the same between tables.  Every table with an identity column has a 1 for the first record, a 2 for the second, etc.  Of course you can set the seed and increment for the identity column to something other than (1,1), but you still have to manually track which table has which seed.  Trying to track that is destined for failure.  What we really need is a way to have automatically assigned numbers for new records, but ones which are guaranteed to be unique across any table that uses them.  Of course, you know I wouldn’t be writing this if there weren’t a solution.  In SQL Server 2012, Microsoft introduced something called the SEQUENCE object that does this very thing.

    ENTER THE SEQUENCE…

    There are two steps involved here.  First we create the SEQUENCE object, and then we use it when we insert records.  So, let’s say I am creating a database to track my cigars.  I have two tables, one tracks my Cubans and the other tracks all the others.  I need to keep my Cubans in a separate table so that if the Bureau of Alcohol, Tobacco, Firearms and Explosives ever investigates my system I can easily drop that table.  (Have you ever wondered what Alcohol and Tobacco have to do with Firearms and Explosives?  I guess that’s a topic for a different kind of blog.)   Anyway, I want to number my individual cigars and I want them to be kept in order, whichever table they are in.  Sounds like a job for a SEQUENCE object.  So, let’s create the tables:

    USE CIGARS
    GO

    -- The tables live in a user-defined schema, so create it first.
    CREATE SCHEMA stogies
    GO

    CREATE TABLE stogies.Legal (CigarID int, Brand nvarchar(50), Rating tinyint)
    CREATE TABLE stogies.Cuban (CigarID int, Brand nvarchar(50), Rating tinyint)
    GO

    Now, let’s create the SEQUENCE.  Of course you can open Management Studio and use the GUI, but that’s no fun; we want to write some code.

    CREATE SEQUENCE stogies.CountEm AS int
    START WITH 100 INCREMENT BY 10
    GO

    So now, all that’s left to do is insert some records.  To insert a value into my CigarID column, I will use the NEXT VALUE FOR function to generate a new unique number.  It will look something like this:

    INSERT stogies.Legal VALUES (NEXT VALUE FOR stogies.CountEm, 'MonteCristo', 7)
    INSERT stogies.Cuban VALUES (NEXT VALUE FOR stogies.CountEm, 'AVO', 10)
    INSERT stogies.Legal VALUES (NEXT VALUE FOR stogies.CountEm, 'La Gloria', 8)
    INSERT stogies.Cuban VALUES (NEXT VALUE FOR stogies.CountEm, 'Hoyo de Monterey', 9)

    Now, if I query the two tables, I get the following output:

    SELECT * FROM stogies.Legal

    CigarID  Brand             Rating
    -------  ----------------  ------
    100      MonteCristo       7
    120      La Gloria         8

    SELECT * FROM stogies.Cuban

    CigarID  Brand             Rating
    -------  ----------------  ------
    110      AVO               10
    130      Hoyo de Monterey  9

    Words of caution: according to BOL, “Sequence numbers are generated outside the scope of the current transaction. They are consumed whether the transaction using the sequence number is committed or rolled back.”  In other words, they are generated any time you call the NEXT VALUE FOR function, and they are never re-used.  That is true even if you do this (which would return 140):

    SELECT NEXT VALUE FOR stogies.CountEm

    OK, there is one major error in this article, and it has nothing to do with coding or sequencing.  If you find the error, send me an email (jeff.rathjen@quickstart.com) and you will win a brownie point.  Or better yet, come see me in class sometime.

  • 3 Ways to Monitor with PowerShell

    by QuickStart Intelligence | Mar 25, 2015

    A student in a recent class asked for a script that could take every command a user typed into PowerShell and send it to a csv file for monitoring purposes. He didn’t want to capture history after the fact with a ‘Get-History’ command, since the user could easily run ‘Clear-History’ and remove the commands they had run to that point. That prompted me to think about the choices you have with PowerShell. His chosen alternative was to send events to the PowerShell event log; however, when monitoring anything, you could also decide to send data to a SQL database. My thanks to David W for his question, and his eventual script, which you can see below.

    With PowerShell, you can easily export your data to a csv file. For ease of demonstration, I’ll use a simple command:

    Get-WmiObject Win32_BIOS -ComputerName (Get-Content servers.txt) | Export-Csv serverbios.csv

    That’s not so hard. But how do you get PowerShell to take each command, as you type it, and send the command to… well, wherever you choose?

    We went to the web and found a script on www.powershell.com that changes the ‘prompt’ function: http://powershell.com/cs/blogs/tips/archive/2012/11/26/logging-input-commands.aspx. So now, each time the user hit Enter and PowerShell ran the command and offered the next prompt, the command was sent to a .txt file. Modifying the script so that it was sent to a csv file instead was easy. Here is what the original looked like:

    function prompt
    {
      'PS> '
      $Host.UI.RawUI.WindowTitle = Get-Location
      if ($global:CmdLogFile) {
          Get-History -Count 1 |
            Select-Object -ExpandProperty CommandLine |
            Out-File $global:CmdLogFile -Append
      }
    }

    To enable logging, set $global:CmdLogFile to a path, like this:

    $global:CmdLogFile = "$env:temp\logfile.txt"

    The problem with this script is that it doesn’t record the server name, username, date, and time. So, the script was modified to log to the Windows PowerShell event log. As he pointed out, it simply logs anything the user has typed in: each time you hit Enter in PowerShell, it records a new entry in the event log. The nice thing is that it will capture whatever the user typed, whether it is a .ps1 name, a cmdlet name, or a .NET function call. By using an event, we don’t need to put the server name, user name, or date/time in the event description (we get that for free).

    function prompt
    {
      "PS $($executionContext.SessionState.Path.CurrentLocation)$('>' * ($nestedPromptLevel + 1)) "
      $InstanceID = $(Get-Host).InstanceId
      $xml = "<PSData>`n"
      $xml += "<cmd>$(Get-History -Count 1 | Select-Object -ExpandProperty CommandLine)</cmd>`n"
      $xml += "<InstanceID>$InstanceID</InstanceID>`n"
      $xml += "</PSData>"
      Write-EventLog -LogName 'Windows PowerShell' -Source PowerShell -EventID 9000 -EntryType Information -Message $xml -Category 8
    }

    If you prefer the csv approach, the same prompt function can instead build an object and append it to a file:

    function prompt
    {
      "PS $($executionContext.SessionState.Path.CurrentLocation)$('>' * ($nestedPromptLevel + 1)) "
      $auditLogEntry = New-Object PSObject
      $auditLogEntry | Add-Member NoteProperty CommandLine (Get-History -Count 1 | Select-Object -ExpandProperty CommandLine)
      $auditLogEntry | Add-Member NoteProperty DateTime (Get-Date)
      $auditLogEntry | Add-Member NoteProperty ComputerName $env:COMPUTERNAME
      $auditLogEntry | Add-Member NoteProperty UserName $env:USERNAME
      $auditLogEntry | Export-Csv 'C:\temp\test.csv' -NoTypeInformation -Append
    }

    So, that’s pretty cool right? How could that possibly be improved upon, you wonder? Ah, well, what about adding the ability to log the contents of any PowerShell script that gets called? So, here is the updated version:

    function prompt
    {
      "PS $($executionContext.SessionState.Path.CurrentLocation)$('>' * ($nestedPromptLevel + 1)) "
      $commandtext = Get-History -Count 1 | Select-Object -ExpandProperty CommandLine
      if ($commandtext)
      {
        $InstanceID = $(Get-Host).InstanceId
        $data  = "Command Text = $commandtext`n"
        $data += "Host ID = $InstanceID`n"
        if ($commandtext.ToLower().Contains('.ps1'))
        {
          # If the command ran a .ps1 file, capture the script's contents as well.
          $ps1FileName = $commandtext.Substring(0, $commandtext.IndexOf('.ps1')) + '.ps1'
          if (Test-Path $ps1FileName -ErrorAction SilentlyContinue)
          {
            $scriptContents = Get-Content $ps1FileName -ErrorAction SilentlyContinue
            $data += "Script Contents = $scriptContents`n"
          }
        }
        Write-EventLog -LogName 'Windows PowerShell' -Source PowerShell -EventID 9000 -EntryType Information -Message $data -Category 8
      }
    }

    Now that’s using the power of PowerShell!
    As an alternative, if you aren’t capturing PowerShell commands but want to monitor SQL Server itself, I’d encourage you to take a look at the blogs of Laerte Junior, a SQL Server MVP. He has blogged on creating a monitoring server for SQL Server with PowerShell, https://www.simple-talk.com/sql/database-administration/create-a-monitoring-server-for-sql-server-with-powershell/, as well as gathering Perfmon data with PowerShell, https://www.simple-talk.com/sql/database-administration/gathering-perfmon-data-with-powershell/.
    Check out his other blogs at: https://www.simple-talk.com/author/laerte-junior/
    Here’s to your successful monitoring! You can monitor to a csv or text file, to a Windows Event Log, or to SQL Server.

  • Top 5 PowerShell Scripting Sources

    by QuickStart Intelligence | Mar 25, 2015

    In a recent class, I pointed out to students some of my favorite scripting sources, and found that some students had found one or two, but not all, of my favorites. So, since PowerShell is all about finding the right script, here are my top five websites for PowerShell scripts. And, by the way, the right script might be the one that does exactly what you need, or it could be one that does something very close to what you need and that you can modify.
    These sites are not in ranked order (as in, the first being the _very_ best); instead, I’ll put them down as they come to me.

    1. PowerShell.com This web site has not only scripts, blogs, and snippets, but also one of my favorite ways of continuing to learn PowerShell: the PowerShell script of the day. Tobias Weltner has been working with PowerShell for a long time, and this site is rich in resources, including some free pdf books on PowerShell administration and remoting.
    2. Poshcode.org This web site is great in that there are thousands of scripts and people keep on adding scripts and improving them. Joel Bennett and Lee Holmes have both contributed lots of content on this site, to only name two well-known PowerShell gurus.
    3. Codeplex.com This web site has a great SharePoint install script written in PowerShell (AutoSPInstaller.codeplex.com) , that will install everything your farm needs, (except PowerPivot) by using the information you configure in an xml file. Since correctly installing all the components of a SharePoint farm is a serious challenge, this contribution alone makes codeplex stand out. But, wait, it also has sub sites for SQL PowerShell community extensions, and much, much more. HOWEVER, I have found that going to codeplex (www.codeplex.com) and using their ‘Search’ box doesn’t result in as many, or as relevant entries, as using my favorite search engine with the keywords: PowerShell codeplex.
    4. The Microsoft scripting center. Long a resource for administrators, this site not only has PowerShell scripts, but also VB scripts.
    5. Your favorite search engine. Since so far I’ve been catching and preparing your fish for you, now I want to teach you to fish: When using your favorite search engine, type in PowerShell, then the Microsoft product you are interested in, if any, and then the rest of the query you have. For example: PowerShell monitor and send to csv file.

    I hope you find what you’re looking for. You’ll usually save time by spending a little more time searching for existing scripts than by trying to build one from scratch. Even if you just break your task into pieces and look for scripts that you can patch together, you will often save time.