Create Constraint to Only Have One Row With Certain Value

Say you have an Employee table… and this table tracks the history of each employee.  It’s a hybrid slowly changing dimension.  So it looks like this: Employee (EmployeeId, Name, Department, SupervisorId, Obsolete, CreateDate, ObsoleteDate).

EmployeeId | Name  | Department  | SupervisorId | Obsolete | CreateDate | ObsoleteDate
10         | Bob   | Finance     | 10           | N        | 1/1/2012   | NULL
20         | Terry | Finance     | 10           | Y        | 1/1/2012   | 6/1/2012
20         | Terry | Marketing   | 20           | Y        | 6/1/2012   | 7/1/2012
20         | Terry | Collections | 20           | N        | 7/1/2012   | NULL

If you want to enforce there being only one row for the current employee record, and many rows tracking that employee’s history, you might think of using a trigger that rolls the insert back after checking whether there is already a row for that employee with Obsolete set to ‘N’.  One way I have enforced this in the past… and I happen to think it is a much cleaner and slicker solution… is a unique filtered index.  Something like this:

CREATE UNIQUE NONCLUSTERED INDEX [IX_EMPLOYEE_1] ON [dbo].[Employee]
(
[EmployeeId] ASC,
[Obsolete] ASC
)
WHERE ([Obsolete]='N')

This will cause an insert to fail for an employee who already has a record that is not obsolete.  And if you include columns that are heavily used when querying the table for current employee records… the index will also help with those searches.
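As a quick illustration (a minimal sketch that assumes the dbo.Employee table above exists with the sample rows loaded; the ‘Sales’ department value and the dates are made up for the example):

-- With the sample rows above loaded, EmployeeId 20 already has a row with Obsolete = 'N',
-- so this insert violates IX_EMPLOYEE_1 and fails with a duplicate key error:
INSERT INTO dbo.Employee (EmployeeId, Name, Department, SupervisorId, Obsolete, CreateDate, ObsoleteDate)
VALUES (20, 'Terry', 'Sales', 20, 'N', '1/1/2013', NULL)

-- The usual pattern is to obsolete the current row first, then insert the new current one:
UPDATE dbo.Employee
SET Obsolete = 'Y', ObsoleteDate = '1/1/2013'
WHERE EmployeeId = 20 AND Obsolete = 'N'

INSERT INTO dbo.Employee (EmployeeId, Name, Department, SupervisorId, Obsolete, CreateDate, ObsoleteDate)
VALUES (20, 'Terry', 'Sales', 20, 'N', '1/1/2013', NULL)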


Down the Rabbit Hole We Go: Double-Hop Style

Double hop.  This is when you are connected/authenticated to one server from a client machine, and in the same process/connection try to use those same credentials to connect to resources on another server (e.g. running a linked server query using Windows authentication from your client PC).

Windows’ default NTLM authentication does not allow this type of communication.  With NTLM, when a client asks for access, the server sends the client a challenge, and the client responds with a hash of its own (the user’s password hash combined with the hashed server challenge).  The server then passes this response (again, a mix of the user’s password hash and the server’s challenge) to the domain controller to verify it is correct.  So the first hop/server only holds a hash built from its own challenge and the user’s password hash.  It has nothing it can pass to a second server to authenticate the current user, since what it received is a mix specific to itself and the user.  Here is where we get the famous:

Login failed for user ‘(null)’. Reason: Not associated with a trusted SQL Server connection

Kerberos, on the other hand, doesn’t rely on the user’s password or any hashes.  It relies only on a Kerberos ticket that is assigned to the user upon logging into the domain.  This ticket is passed to the Ticket Granting Service (on the DC) and verified to be valid.  If it is valid, and the user the ticket is assigned to has access to the service, then authentication is allowed.

To accomplish the previously discussed double hop scenario of having a linked server use windows authentication, kerberos is the answer.  To do this, the service must have a valid SPN. 

There are two ways to configure an instance of SQL to have a valid SPN.

1. Auto-register

When the SQL Server service starts, it contacts the domain controller using its “Log on as” account and tries to register an SPN for itself.  To do this successfully, the service account must have at least the “read servicePrincipalName” and “write servicePrincipalName” AD permissions.  If the service is set to run as Local System, then the computer account must hold that permission; if the service runs as a domain account, then that account must hold it.  When the SQL service stops, it unregisters that SPN within AD.

2. Manually set

SETSPN is a command-line utility that is part of the Windows Server support tools, and it is what you use to set an SPN manually.  The account used to run SETSPN must have rights to add/change an SPN within AD; this privilege is a validated write known as “Validated write to service principal name”.  The flag to add an SPN is -A.  Below is the format used for setting up a valid SPN for TCP/Kerberos authentication with SQL Server.

SETSPN -A MSSQLSvc/<SQL Server Name>:<port> <service account>

SETSPN -A MSSQLSvc/<Fully Qualified SQL Server Name>:<port> <service account>

There are two commands above because you must register both the fully qualified domain name and the NetBIOS server name; Kerberos will fail if the client asks for the FQDN and only the short server name is registered.  Also of note, if there is a duplicate SPN within AD, Kerberos will fail.  Meaning, if the server is registered under two different AD accounts, Kerberos will fail.  To root these out there is new functionality in the 2008 support tools: the -X flag will search for duplicates, and you can then use the -D flag to delete the erroneous one(s).  The -L flag will list any registered SPNs for the specified AD account.  Lastly, the -F and -S flags can be used instead of -A if you would like to make sure no other SPNs are already registered by that name anywhere in the AD forest.

Verify Kerberos is being used:

1.  Check the error log on the SQL instance.  If it contains something like:

The SQL Server Network Interface library could not register the Service Principal Name (SPN) for the SQL Server service. Error: 0x2098, state: 15. Failure to register an SPN may cause integrated authentication to fall back to NTLM instead of Kerberos. This is an informational message. Further action is only required if Kerberos authentication is required by authentication policies.

Then Kerberos will not work.

2.  Connect to the instance and run this query:

select net_transport, auth_scheme from sys.dm_exec_connections where session_id=@@spid;

Column auth_scheme should be “kerberos” not “NTLM”.
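If you want to spot-check every current connection rather than just your own session, a sketch using the same DMVs, joined to sys.dm_exec_sessions for the login name:

select s.login_name, c.session_id, c.net_transport, c.auth_scheme, c.client_net_address
from sys.dm_exec_connections c
join sys.dm_exec_sessions s on s.session_id = c.session_id
order by s.login_name;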

With this new knowledge of how to create an SPN, and how to verify Kerberos is being used, you must now set up the SPNs and delegation correctly for the right user/computer objects.  Details are below:

Client (i.e. the workstation with Management Studio)
1. The workstation and user must be part of the domain, or of a trusted domain, where both servers reside.
2. “Account is sensitive and cannot be delegated” must not be set on the user account.
3. Must be able to open a session to SQL Server InstanceA via a SQL connection. This must be a Kerberos-enabled session.

Middle Server (i.e. the server with the linked servers specified on it – InstanceA)
1. InstanceA’s service account must have a valid SPN for both the FQDN and NetBIOS name of InstanceA (detailed above).
2. InstanceA’s service account must be trusted for delegation to the linked server’s SPN. So, if adding constrained delegation, find InstanceA’s service account in AD and, under its delegation tab, search for InstanceB’s service account and select its SPN. Otherwise, if you want to just leave it wide open, select trusted for delegation to any service.
3. Must be able to open a session, with the same user account used from the client, to SQL Server InstanceB. This must be a Kerberos-enabled session.
4. Create a self-mapping (“Be made using the login’s current security context”) linked server to InstanceB (see the T-SQL sketch below).

Linked/Target Server (i.e. the server you’re trying to run the distributed query against – InstanceB)
1. InstanceB’s service account must have a valid SPN for the FQDN and NetBIOS name of InstanceB.
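For step 4 on the middle server, here is a rough sketch of doing the same thing with T-SQL instead of the GUI (‘InstanceB’ is a placeholder instance name; sp_addlinkedsrvlogin with @useself = 'TRUE' is the scripted equivalent of “Be made using the login’s current security context”):

-- Run on InstanceA. 'InstanceB' is a hypothetical linked server / instance name.
EXEC sp_addlinkedserver @server = N'InstanceB', @srvproduct = N'SQL Server';
-- Map every login to itself so the current security context (and Kerberos delegation) is used.
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'InstanceB', @useself = 'TRUE', @locallogin = NULL;
-- Test the double hop: the remote side should report KERBEROS, not NTLM.
SELECT * FROM OPENQUERY([InstanceB], 'select auth_scheme from sys.dm_exec_connections where session_id = @@spid');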


Expand and Collapse tablix for SSRS

It’s been awhile… let’s see if I can start this up again. This entry is about a trick I’ve used in several SSRS reports in the past. It’s so stupid it’s genius. Making a report functional while getting everything the client wants on one screen is sometimes tough to do. You might have 5 pie charts and then 3 tablixes. A nice way to have the tablixes take up the least amount of real estate is to use the “toggled by” show/hide feature.

To do this, follow the steps below:

1) Drop your tablix on the body of the report.
2) Create your dataset.
3) Right click on your dataset and select Add Calculated Field.
4) Enter an expression with a value like ="" or, if you want a header for the group, put that value in there.
5) Then right click on your detail group down in the Row Groups pane and select Add Group -> Parent Group.
6) For the field to group on, use the previously created calculated field.
7) Now, down in your row groups, right click on your new group and go to Properties.
8) On the Visibility tab select “Hide” and “Display can be toggled by this report item”.
9) The report item to select in the drop down is the header textbox in the newly created group.
10) Type whatever you want in the header cell… and you’re done.


UPDATE: Bad File

Regarding my previous post about an erroneous log file entry: http://mattabase.com/archives/295

As I thought, I couldn’t get Microsoft to sign off on altering the system tables. So I created another workaround that gets rid of the file. It only uses their own operations (detach, attach, create file, and remove file), so it had to be supported. I ran it by them anyway just to make sure, and they gave me the green light. So I did it, and it was successful. Here are the steps:

1. First attempt to delete the bad log file and get the error message about it not being initialized (this does part of the cleanup), e.g.:

ALTER DATABASE [testy] REMOVE FILE [testy_log2]
GO

2. Check with DBCC LOGINFO whether or not the bad log file is in use.

a. If it is, then you’re going to have to switch the database into simple recovery for a second (see the sketch after these steps)…
b. If the log is not in use, then continue…

3. Detach the database.
4. Reattach it and make sure there are now only 2 log file entries in sys.sysaltfiles.
5. Create a temp data file with…

ALTER DATABASE [testy] ADD FILE ( NAME = N'testy_temp', FILENAME = N'C:\testy_temp2.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB ) TO FILEGROUP [PRIMARY]
GO

6. Delete both files like so (in this order):

ALTER DATABASE [testy] REMOVE FILE [testy_log2]
GO

ALTER DATABASE [testy] REMOVE FILE [testy_temp]
GO

7. Detach the database.
8. Attach the database.
9. Check sysaltfiles (the bad file entry should be gone).

If you were unlucky enough to have the bad log file in use, then don’t forget to flip the database back to full recovery and take a full backup to start a new log chain.
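For step 2a, a minimal sketch of that recovery flip (using the testy database from this post; the backup path is just a placeholder):

-- Only needed if DBCC LOGINFO showed the bad log file still held active VLFs (step 2a).
ALTER DATABASE [testy] SET RECOVERY SIMPLE
GO
-- ...run through the detach/attach/add/remove steps above...
ALTER DATABASE [testy] SET RECOVERY FULL
GO
-- A full backup starts a new log chain so log backups work again.
BACKUP DATABASE [testy] TO DISK = N'C:\testy_full.bak'
GO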


Show Status of Large Index Rebuilds

I googled around a bit and wasn’t really able to find anything on how to check the status of an index rebuild or a CREATE INDEX ... WITH DROP_EXISTING request. The way I was able to get some insight was a query of sys.partitions. Something like this:

select * from sys.partitions where object_id = object_id('schema.table') order by index_id, partition_number

I could see which partition it was working on and how many rows of that partition were done. In our case we ran a CREATE INDEX with DROP_EXISTING = ON; I would think the behavior is the same for all rebuilds, because essentially they do the same thing. Also, if the table you’re working on is not partitioned, there should only be 2 rows returned: one for the old index and one for the new. The new row’s rows column will increment as it is filled. For us it looked like this:

[screenshot: sys.partitions output showing rebuild progress]

The partition_number being worked on was 10, and 1,660,865 rows out of 2,064,517 had completed. The next partition to be worked on was 11, and it will have to do 5,212,728 rows to complete… and so on.
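If you would rather see index names next to the row counts, a slight variation on the same query (a sketch against the same catalog views; the left join is there because the in-flight rowset may not have a matching sys.indexes entry yet, and 'schema.table' is a placeholder):

select i.name as index_name, p.index_id, p.partition_number, p.rows
from sys.partitions p
left join sys.indexes i on i.object_id = p.object_id and i.index_id = p.index_id
where p.object_id = object_id('schema.table')
order by p.index_id, p.partition_number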


Bad File

One weekend I got a call that the transaction log was full for our finance database. A long-running batch had been running for half the day and filled up the log because the developer had never heard of a commit. The best way to let this thing finish, and not have the database crash into hours of recovery, was to create a second log file somewhere we had room. So I did this… except I made a major boo-boo. I used the GUI and somehow created a data file instead of a log file. At first glance, not a big issue, because I can just delete it and give it another go. Did this and it went OK. Then a day later I went to delete the extra log file, after no longer needing it, and I got:

One or more files listed in the statement could not be found or could not be initialized.

So I did some research, found it’s a known bug, and then called MS support. They told us it will be fixed in the next major release (Denali). So the fix is to wait for that, or to copy the data into a new database. The database we created the extra log file for is going on a TB and is our major finance system, so that much downtime isn’t going to happen for an extra log file. And an upgrade to Denali, when it finally comes out, will be at least a year or two after that.

This bugged me. I now had this log file out there that we no longer needed, on a LUN with other databases. Sure, we could take the db down and copy the file over to the designated log LUN with the other log. But even this made me mad and would be time consuming. I should be able to delete the darn thing. So I went on a quest to figure out what was going on.

First, create a test database (in my case called ‘testy’). Then, run the following…

select * from sys.sysaltfiles where name like 'testy%'
ALTER DATABASE [testy] ADD FILE ( NAME = N'testy_log2', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\testy_log2.ldf' , SIZE = 2048KB , FILEGROWTH = 1024KB ) TO FILEGROUP [PRIMARY]
GO
select * from sys.sysaltfiles where name like 'testy%'
ALTER DATABASE [testy] REMOVE FILE [testy_log2]
GO
select * from sys.sysaltfiles where name like 'testy%'
ALTER DATABASE [testy] ADD LOG FILE ( NAME = N'testy_log2', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\testy_log2.ldf' , SIZE = 1024KB , FILEGROWTH = 10%)
GO
select * from sys.sysaltfiles where name like 'testy%'
ALTER DATABASE [testy] REMOVE FILE [testy_log2]
GO
select * from sys.sysaltfiles where name like 'testy%'

Notice the results… Looks good, right?

(2 row(s) affected)
(3 row(s) affected)
The file 'testy_log2' has been removed.
(2 row(s) affected)
(3 row(s) affected)
The file 'testy_log2' has been removed.
(2 row(s) affected)

[screenshot: sys.sysaltfiles results after the first create/drop cycle]

Now take a log backup… which will require a full backup, because you haven’t taken one yet. Also, because of the way SQL Server tracks which files have or have not been part of a backup, you’ll need to run a log backup after you create and then drop the first file. Otherwise you’ll get an error stating you cannot create a file with the same logical name until a log backup has completed. So, run the script below to create, drop, backup, create, drop…

BACKUP DATABASE testy TO DISK = 'c:\testy.bak'
GO
BACKUP LOG testy TO DISK = 'c:\test.bak'
GO
select * from sys.sysaltfiles where name like 'testy%'
ALTER DATABASE [testy] ADD FILE ( NAME = N'testy_log2', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\testy_log2.ldf' , SIZE = 2048KB , FILEGROWTH = 1024KB ) TO FILEGROUP [PRIMARY]
GO
select * from sys.sysaltfiles where name like 'testy%'
ALTER DATABASE [testy] REMOVE FILE [testy_log2]
GO
BACKUP LOG testy TO DISK = 'c:\test.bak'
GO
select * from sys.sysaltfiles where name like 'testy%'
ALTER DATABASE [testy] ADD LOG FILE ( NAME = N'testy_log2', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\testy_log2.ldf' , SIZE = 1024KB , FILEGROWTH = 10%)
GO
select * from sys.sysaltfiles where name like 'testy%'
ALTER DATABASE [testy] REMOVE FILE [testy_log2]
GO
select * from sys.sysaltfiles where name like 'testy%'

Results…

Processed 160 pages for database ‘testy’, file ‘testy’ on file 5.
Processed 2 pages for database ‘testy’, file ‘testy_log’ on file 5.
BACKUP DATABASE successfully processed 162 pages in 0.407 seconds (3.102 MB/sec).
Processed 3 pages for database ‘testy’, file ‘testy_log’ on file 11.
BACKUP LOG successfully processed 3 pages in 0.063 seconds (0.263 MB/sec).
(2 row(s) affected)
(3 row(s) affected)
The file 'testy_log2' has been removed.
Processed 5 pages for database ‘testy’, file ‘testy_log’ on file 12.
BACKUP LOG successfully processed 5 pages in 0.227 seconds (0.159 MB/sec).
(3 row(s) affected)
(4 row(s) affected)
Msg 5009, Level 16, State 9, Line 4
One or more files listed in the statement could not be found or could not be initialized.
(4 row(s) affected)

[screenshot: sys.sysaltfiles results after the second create/drop cycle]

What’s going on? I thought that got deleted? Why are they both in sysaltfiles? I’m confused… Why aren’t things getting deleted from the system tables? If you inspect sysaltfiles closely you’ll see the original, mistakenly created data file still exists. So now there is a duplicate entry in sysaltfiles for the logical name 'testy_log2'. Yup, definite bug.

How to fix it? The next steps need to be done in the DAC and involve changing system tables. This isn’t supported unless MS tells you to do so, which they haven’t told me to, so we haven’t done this on our financials database. I do not suggest doing this unless you contact MS first.

Steps:
1. Back up the log and check to make sure nothing is active in the log file you want to delete (DBCC LOGINFO; anything with a status other than 0 is active).
2. Stop the instance.
3. Start the instance in single-user mode and connect to the DAC.
4. From the DAC connection run the following:

delete from sys.sysfiles1 where name = 'testy_log2'
delete from sys.sysprufiles where lname = 'testy_log2'
delete from sys.master_files$ where name = 'testy_log2'

5. Stop the instance.
6. Start the instance normally.

If all went well, you should be able to delete the log file from the file system, and it should have disappeared from the sysaltfiles view. How did I figure this out? While connected to the DAC I ran a couple of queries… First I had to figure out which tables contain file data.

select * from sys.syscolpars order by name
select * from sys.objects where type = 'S' 
sp_helptext 'sys.sysaltfiles'

With these three queries I was able to find anything file related. I figured anything file related would have a column called fileid, so I looked in syscolpars for that and got the table’s object id. Then I looked for that table by id in sys.objects. Lastly, and most importantly, the main table that supplies data for sysaltfiles: I needed to look at the text of the view, which queries the master_files$ table. If you do not delete the rows from this table, it will repopulate the other two tables with the erroneous entries again at startup.
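Roughly, that hunt for a fileid column can be folded into one query (a sketch run from the DAC; I’m assuming sys.syscolpars exposes the owning object id in its id column, the way syscolumns did):

select o.name as table_name, o.object_id
from sys.syscolpars c
join sys.objects o on o.object_id = c.id
where c.name = 'fileid' and o.type = 'S'
order by o.name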

Here is what the entries looked like for the bad testy database.

[screenshot: the system table entries for the bad testy database]

Once I deleted the rows corresponding to testy_log2 and started the instance back up, everything was good again. I haven’t noticed any issues with this database since. I ran a couple of long batches to fill up the log, backed it up a few times, etc., and it seems fine. But again, I would not do this unless MS says it’s OK. Which I plan on asking… I’ll let you know how that goes.

NOTE: This bug has been reported in 2005, 2008 and 2008 R2. I was running 2008 SP2 for this test.


Try… Catch… Try Again

I remember hearing about the new TRY...CATCH feature for T-SQL in 2005. I was very excited to use it, as I knew from other languages how much it helps with the logic and readability of your code. To my dismay, it wasn’t all that I had hoped for. I hoped it would help with trapping errors and returning error messages that were useful. In all cases you can trap an error; it’s the returning of a valuable error message that is still missing, because certain pieces of T-SQL are out of our coding control (e.g. BACKUP DATABASE, ALTER INDEX, etc.) and it’s not up to us how the error is passed. A prime example of this is the backup command. Any time an error occurs with a backup command, two critical errors are passed back. The CATCH block is only able to catch one error message, and the SQL engineers thought it was wise to catch the last one. An example would look like the following:

Msg 3201, Level 16, State 1, Line 3
Cannot open backup device ‘C:\SQLBU\newscripts\FULL_AdventureWorks_20101014121506.BAK’. Operating system error 3(The system cannot find the path specified.).
Msg 3013, Level 16, State 1, Line 3
BACKUP DATABASE is terminating abnormally.

But if caught in a TRY...CATCH, error_message() is the very unhelpful ‘BACKUP DATABASE is terminating abnormally’. This is very troublesome for someone who wants to create the normal back-up-everything-on-the-instance script. Sure, backups for the other databases would continue… but you’re stuck trying to figure out what might have gone wrong. And by the time you finally get to looking on the server, the condition that made it fail may no longer exist and you’re left scratching your head.
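To see it for yourself, a minimal sketch (the folder is deliberately invalid so the backup fails; swap in any database you actually have):

BEGIN TRY
	-- Hypothetical path that does not exist, so the backup fails with two errors (3201 then 3013).
	BACKUP DATABASE [AdventureWorks] TO DISK = N'C:\DoesNotExist\FULL_AdventureWorks.BAK'
END TRY
BEGIN CATCH
	-- Only the second, generic error is available here: "BACKUP DATABASE is terminating abnormally."
	SELECT ERROR_NUMBER() AS error_number, ERROR_MESSAGE() AS error_message
END CATCH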

I see many people complaining about this very example, because it’s pretty commonplace to want to back up an entire instance with T-SQL and then report any errors via email, output file, or whatever. To get around this I’ve added a couple of things to my backup scripts to help me troubleshoot backup failures (which I also broke apart to be more modular now… see the attached zip for all my code). I now start a server-side trace and save it to a file on the server every time a backup is running. I filter on the SPID of the currently executing process, and a couple of other things, to keep the size down, and also delete the previous .trc files if they exist. Now, if I receive notification that a backup failed, I can go out to wherever I am storing the .trc files and open them up in Profiler. Inside will be the better, more descriptive first error. Here are the important snippets of code:

-- start trace if debug set
	select @TraceFile=@TraceDirectory + N'backup' + @Type + [dba].dbo.fn_DateLabel()
	BEGIN TRY
		IF @debug = 1 
		BEGIN
			exec DBA.dbo.usp_DeleteTraceFiles @TraceDirectory = @TraceDirectory, @Type = @Type, @DaysToKeep = @DaysToKeep
			exec DBA.dbo.usp_StartBackupTrace @FileName=@TraceFile,@SPID=@@SPID, @TraceID=@TraceIDOut OUTPUT
		END
	END TRY
	BEGIN CATCH
		print 'Could not start trace... see output above for return code.'
	END CATCH

So we call a proc to start the trace… and then later on in the script, when the backups are done:

--STOP AND DELETE TRACE DEFINITION
if @TraceIDOut > 0 
BEGIN
	exec DBA.dbo.usp_StopBackupTrace @TraceIDOut
END

…we call another proc to stop it. In the case above, where the location is not valid, you’ll see something like this in the trace file when looking on the server:

[screenshot: the descriptive first backup error captured in the Profiler trace]

If you want my code, whether to use as a reference or just as is… backupscripts.zip

NOTE: We store all our maintenance scripts in a database called DBA on each server. So if you want to use this as is, you’ll either need to take all references to DBA out of the scripts, or follow suit and create a database called DBA and put these scripts in it. In the zip file are 12 scripts that create the following objects:

  • ErrorTableType: a table type so we can pass errors to our error notification handler, usp_ProcessErrors
  • fn_CommaDelimStringToTable: a function that takes a comma-delimited string and returns a table
  • fn_DateLabel: a function that returns the current timestamp as a string
  • fn_DefaultBackupDirectory: a function that returns the location to put backups… keys off of @@servername
  • fn_DefaultTraceDirectory: a function that returns the location to put trace files… keys off of @@servername
  • usp_Backup: stored proc used to initiate instance backup calls
  • usp_BackupDiff: stored proc called by usp_Backup to run a differential backup for a database
  • usp_BackupFull: stored proc called by usp_Backup to run a full backup for a database
  • usp_BackupLog: stored proc called by usp_Backup to run a log backup for a database
  • usp_CmdShellSwitch: stored proc called to enable or disable xp_cmdshell (needed to delete files)
  • usp_DeleteBackupFiles: stored proc called to delete backup files of a particular type in the default folder
  • usp_DeleteTraceFiles: stored proc called to delete trace files of a particular type in the default folder
  • usp_StartBackupTrace: stored proc to start a server-side trace with minimal events to keep the size down
  • usp_StopBackupTrace: stored proc to stop a server-side trace and delete its definition from the server

One last note: these scripts are designed for SQL Server 2008.


Peoplesoft and Resource Governor

During implementation of our new PS finance environment, I was given the task of exploring SQL 2008's Resource Governor capabilities. We wanted, if possible, to categorize transactions as batch, psquery, administrative, and other. This was to ensure that someone running a psquery, batch, or administrative transaction couldn’t take over the whole server.

Well, I’m new to Peoplesoft and thought I’d give it a go. The first thing I had to do was understand how PS logs into SQL Server. That wasn’t so hard. In short, PS uses connection pooling and comes in as a user with access to its database. This user has dbo rights. There are a finite number of connections (worker processes) made at startup for normal processing. Then, depending on how the PS admins set things up, a connection can be made on the fly or an existing connection can be grabbed. From there, the first thing it does is query some PS tables to see if the PS login has rights to what it needs. But again, it does all of this with just a single SQL login account. What it does do is update the context_info field that shows up in sys.dm_exec_requests. This field can be helpful when things are bogged down and the DBA wants to know who is executing what within SQL. The DBA could run something like:

select session_id, cast(context_info as varchar(max)) as OPRID, qt.text,
    DB_NAME(database_id) as dbname, wait_type, wait_time, wait_resource, blocking_session_id
from sys.dm_exec_requests
cross apply sys.dm_exec_sql_text(sql_handle) qt;
GO

It shows some info specific to the PS user running the query and then the actual query it’s running.

That’s good to know, but Resource Governor doesn’t work that way. It doesn’t look at the context of a running transaction; it classifies things at the time the connection is made. I wasn’t going to give up quite yet though. I knew it would be easy to classify most batch processing, since that comes from the PS Process Scheduler server, not the application server that does the normal day-to-day processing. The tough one is psquery. As far as I understand it, at startup the application server spins up a configured number of processes for handling psquery; you can see the process(es) under Task Manager and even in the startup logs. So my thought was: if I could somehow get the PID of the psquery process, I’d know at connection time, using the hostprocess column in sys.sysprocesses, whether the request came from psquery or not. Currently the only method I can think of, and it wouldn’t be fun, is to somehow grab the PID from the logs at startup and update a table somewhere on the SQL instance; then in my classifier function I could query that table to see if the PID matches hostprocess. That just sounded like too many loops and too much work to get PS queries into a designated bucket, so for now I scrapped it. Lastly, all administrative tasks that are scheduled run as a specific user, so in our case you can just classify on that user. There are many ways you could go for that one, all of which have been discussed ad nauseam.
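For the pieces that did work, here is a rough sketch of the classifier function under my assumptions (the workload group names grpBatch/grpAdmin, the Process Scheduler host name PSPRCS01, and the admin login DOMAIN\sqladmin are all hypothetical placeholders):

USE master;
GO
-- A classifier function must live in master and be schema-bound.
CREATE FUNCTION dbo.fn_PSClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';

    -- Batch work comes from the PS Process Scheduler box (hypothetical host name).
    IF HOST_NAME() = N'PSPRCS01'
        SET @grp = N'grpBatch';

    -- Scheduled administrative work runs as a specific login (hypothetical).
    IF SUSER_SNAME() = N'DOMAIN\sqladmin'
        SET @grp = N'grpAdmin';

    RETURN @grp;
END;
GO
-- Assumes workload groups grpBatch and grpAdmin were already created.
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_PSClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO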

The one that got away was the psquery piece… so if anybody has an idea, I’m open to suggestions.


Fantasy Football Draft Access Database 2010

Finally got around to updating the Access database we used last year with this year’s players… If you’re new to this, here is the previous blog entry with the 2009 version: http://mattabase.com/archives/50. I didn’t really make any changes to this year’s version except adding this year’s players and top 300. Here you go…

FantasyFootball2010.zip


Powershell SQL Server Backup History Script

The script below can be used/scheduled to email someone a report of the database backup history for a list of SQL instances. In a text file, place the instance names line by line, then change the first line of the script to point to that file. Then simply change emailFrom, emailTo, and smtpServer to the correct values for your environment and you’re good to go. As long as you send it to yourself, you should get back one HTML table with servername, databasename, last_db_backup, and backup age (hours) for all of the instances you specified in the text file.

# Get list of SQL servers
$sqlservers = Get-Content "C:\sqlservers.txt";
# Load SMO extension
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null;
# Get the datetime to use for the log filename
$datetime = Get-Date -Format "yyyy-MM-ddThh-mm-ssZ";
$emailFrom = "whoever@wherever.com";
$emailTo =  "whoever@wherever.com";
$smtpServer = "smptserver.wherever.com"; 

Write-Host "Pulled in " $sqlservers.length " sql servers";

#sql statement for backup history
$sql = "SELECT ";
$sql = $sql + "CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS server, ";
$sql = $sql + "msdb.dbo.backupset.database_name as dbname, ";
$sql = $sql + "MAX(msdb.dbo.backupset.backup_finish_date) AS last_db_backup_date, ";
$sql = $sql + "DATEDIFF(hh, MAX(msdb.dbo.backupset.backup_finish_date), GETDATE()) AS [backup age (hrs)] ";
$sql = $sql + "FROM    msdb.dbo.backupset ";
$sql = $sql + "WHERE     msdb.dbo.backupset.type in  ('D', 'I') ";
$sql = $sql + "and msdb.dbo.backupset.database_name in (select name from master.dbo.sysdatabases) ";
$sql = $sql + "GROUP BY msdb.dbo.backupset.database_name ";
$sql = $sql + "HAVING (MAX(msdb.dbo.backupset.backup_finish_date) < DATEADD(hh, - 24, GETDATE())) ";
$sql = $sql + "UNION ";
$sql = $sql + "SELECT ";
$sql = $sql + "CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS server, ";
$sql = $sql + "master.dbo.sysdatabases.name AS dbname, ";
$sql = $sql + "NULL AS [last_db_backup_date], ";
$sql = $sql + "-1 AS [backup age (hrs)]  ";
$sql = $sql + "FROM master.dbo.sysdatabases LEFT JOIN msdb.dbo.backupset ";
$sql = $sql + "ON master.dbo.sysdatabases.name  = msdb.dbo.backupset.database_name  ";
$sql = $sql + "WHERE msdb.dbo.backupset.database_name IS NULL AND master.dbo.sysdatabases.name <> 'tempdb' ";
$sql = $sql + "ORDER BY msdb.dbo.backupset.database_name";

Write-Host "Will execute... $sql";

$dtmaster = New-Object "System.Data.DataTable";
$commaservers = "";
$i = 0;

# For each SQL server listed in $sqlservers
foreach($sqlserver in $sqlservers)
{
    $i = $i + 1;
    Write-Host "Processing sql server: $sqlserver.";

    if ($i -lt $sqlservers.length)
    {
        $commaservers = $commaservers + $sqlserver + ", "
    }
    else
    {
        $commaservers = $commaservers + " and " + $sqlserver
    }

	# Create an instance of SMO.Server for the current sql server
	$srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $sqlserver;
    $db = New-Object "Microsoft.SqlServer.Management.SMO.Database";
    $db = $srv.Databases["master"];
    $ds = New-Object "System.Data.DataSet";
    $ds = $db.ExecuteWithResults($sql);

    Write-Host "Has rows: " $ds.Tables[0].Rows.Count;

    if ($ds.Tables[0].Rows.Count -ne 0)
    {
        $dtmaster.merge($ds.Tables[0]);
        Write-Host "DTmaster rows: " $dtmaster.Rows.Count;
    }
    Remove-Variable ds;
}

$dtmaster.Rows | format-table -AutoSize;

[string]$body = $dtmaster.Rows |
    select * -ExcludeProperty RowError, RowState, HasErrors, Name, Table, ItemArray |
    ConvertTo-Html -Head $style -Body "

DB BACKUP REPORT

This report contains a list of databases that have exceeded 24hrs since their last good diff or full backup... It reports on servers: $commaservers";

$subject = "DB Backup Report";
$message = New-Object Net.Mail.MailMessage($emailFrom, $emailTo, $subject, $body);
$message.IsBodyHTML = $true;
$smtp = New-Object Net.Mail.SmtpClient($smtpServer);
$smtp.Send($message);

Remove-Variable subject;
Remove-Variable message;
Remove-Variable smtpServer;
Remove-Variable emailTo;
Remove-Variable emailFrom;
Remove-Variable body;
Remove-Variable sqlservers;
Remove-Variable sql;
Remove-Variable srv;
Remove-Variable db;
Remove-Variable dtmaster;
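For readability, here is roughly the T-SQL that the $sql concatenation above produces (reassembled by hand, so treat it as a sketch of the query rather than the script’s exact output):

SELECT  CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS server,
        msdb.dbo.backupset.database_name AS dbname,
        MAX(msdb.dbo.backupset.backup_finish_date) AS last_db_backup_date,
        DATEDIFF(hh, MAX(msdb.dbo.backupset.backup_finish_date), GETDATE()) AS [backup age (hrs)]
FROM    msdb.dbo.backupset
WHERE   msdb.dbo.backupset.type IN ('D', 'I')
        AND msdb.dbo.backupset.database_name IN (SELECT name FROM master.dbo.sysdatabases)
GROUP BY msdb.dbo.backupset.database_name
HAVING  MAX(msdb.dbo.backupset.backup_finish_date) < DATEADD(hh, -24, GETDATE())
UNION
SELECT  CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS server,
        master.dbo.sysdatabases.name AS dbname,
        NULL AS [last_db_backup_date],
        -1 AS [backup age (hrs)]
FROM    master.dbo.sysdatabases
        LEFT JOIN msdb.dbo.backupset
            ON master.dbo.sysdatabases.name = msdb.dbo.backupset.database_name
WHERE   msdb.dbo.backupset.database_name IS NULL
        AND master.dbo.sysdatabases.name <> 'tempdb'
ORDER BY msdb.dbo.backupset.database_name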