Solar Energy

I live in a sunny city, and solar technology has come a long way. So sunny and such a long way that we can easily fit enough panels to power the entire house. Then, after about seven years, once the system has “paid for itself” in savings on the electric bill, it’ll be a source of passive income. The panels carry a 25-year warranty and the other parts a 10-year warranty. The plan is to have the installation take place at the end of the year so I have time to save up the money. Here in California we can sell back our unused electricity via net metering and take what we need later in the day after the sun is down. I am looking forward to divorcing my household from nuclear and fossil fuels.

Posted in Other Things | Leave a comment

Renaming a Cluster Resource

We recently moved the server room from in house to a data center. While power, air, security and seismic capabilities were good reasons, it had to happen because the entire company is moving out of the building! Anyway, while the VMs were not a big deal (other than bandwidth), the physical sql servers required attention beforehand for installation, configuration and mirroring setup to keep the downtime to a minimum. One of the tasks on cut-over night was renaming two of the sql cluster instances back to what the applications expect. It turned out to be a simple task, with fewer steps than renaming a standalone instance. This msdn article has all you need. I was surprised that the local server entry in sysservers was updated as part of the process.

Posted in Configuration | Leave a comment

Buy and Hold is not Buy and Forget

People generally like things, and like thinking their things help. Is this not true? It applies to IT too, specifically regarding applications. Doesn’t more = better? But if you have to configure and support all kinds of packet sniffing, email PHI inspecting, password hacker catching software and keep some eyes on it, that is a lot to ask! Sadly, we’ve all seen the alerting software get “installed” and then ignored until the admins are asked, after the fact, to explain what transpired. Not good. We need to be sure our planning includes time to keep up with the new app. We are not “buy and forget” investors, right?

The thing to do if you find yourself or your group in this predicament is to make a list of all the software: the in house primary and backup people, the quality of the vendor support, and the overall in house level of expertise. Figure out what it would take, time and money wise, to get in house people up to speed and include that too. Take the presentation to the next level up and get a commitment to fill the gaps for critical technologies.

Posted in Leadership | 2 Comments

Using SQL To Configure Database Mirroring

We use database mirroring for replicating data from physical sql servers to the disaster recovery location; SRM is used for virtual machines. I don’t like the point and click interface though: so many screens for a pretty simple series of tasks. If you take the script from here and modify it to also create the following ALTER statements, then mirroring essentially sets itself up. As you can tell, we don’t use a witness server, preferring to manually fail over. When there are a lot of databases to configure this will save you from carpal tunnel, let me tell you.

-- execute on primary
if not exists (select * from sys.tcp_endpoints where name = 'mirroring')
    create endpoint mirroring
        state = started
        as tcp (listener_port = 5022)
        for database_mirroring (role = partner)

-- execute on mirror
if not exists (select * from sys.tcp_endpoints where name = 'mirroring')
    create endpoint mirroring
        state = started
        as tcp (listener_port = 5022)
        for database_mirroring (role = partner)

-- execute on mirror
ALTER DATABASE <<db name>>
SET PARTNER = 'TCP://<<primary server name>>:5022'

-- execute on primary
ALTER DATABASE <<db name>>
SET PARTNER = 'TCP://<<mirror server name>>:5022'

-- we use asynchronous / high performance

Here are some other commands which are helpful when you have to work with many databases at once:

-- to stop mirroring, run on principal or mirror
alter database [<<db name>>] set partner off
select 'alter database [' + db_name(database_id) + '] set partner off' from sys.database_mirroring where mirroring_guid is not null order by 1

-- restore the mirrored database after mirroring is stopped
restore database [<<db name>>] with recovery
select 'restore database [' + db_name(database_id) + '] with recovery' from sys.database_mirroring where mirroring_guid is not null order by 1

-- make mirroring asynchronous
alter database [<<db name>>] set partner safety off
select 'alter database [' + db_name(database_id) + '] set partner safety off' from sys.database_mirroring where mirroring_guid is not null order by 1

-- make mirroring synchronous (required to fail over)
alter database [<<db name>>] set partner safety full
select 'alter database [' + db_name(database_id) + '] set partner safety full' from sys.database_mirroring where mirroring_guid is not null order by 1

-- manual failover (must be run on principal)
alter database [<<db name>>] set partner failover
select 'alter database [' + db_name(database_id) + '] set partner failover' from sys.database_mirroring where mirroring_guid is not null order by 1
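
And a quick way to see where everything stands before or after any of the above; this is just a sketch, reading the same sys.database_mirroring view the generators use:

-- check the role, state and safety level of every mirrored database
select db_name(database_id) as db, mirroring_role_desc, mirroring_state_desc, mirroring_safety_level_desc
from sys.database_mirroring where mirroring_guid is not null order by 1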

Posted in Configuration, DBA, SQL | Leave a comment

The Road to Awareness (aka Event Notifications)

Getting notified of what is going on is critical to meet SLAs. Plus, who wants to be the second to know about something going on with a production sql server? No thanks.

SQL Alerts serve their purpose. Alerts on the cache hit ratio and login failures are great. Disk IO and the higher severity errors are also important indicators. Of course the alerts can’t do everything.
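
The severity alerts can be scripted rather than clicked together. A sketch, assuming an operator named ‘DBA Team’ already exists in msdb:

-- alert on severity 17 errors and email the operator
exec msdb.dbo.sp_add_alert @name = N'Severity 17', @severity = 17, @delay_between_responses = 300, @include_event_description_in = 1
exec msdb.dbo.sp_add_notification @alert_name = N'Severity 17', @operator_name = N'DBA Team', @notification_method = 1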

While having sql which runs regularly to check on this or that is important, an awful lot of jobs would be needed to check on everything. Currently for me the short list is: what is filling up tempdb; database mirroring which is not synchronized or synchronizing; and free space on the sql server drives.
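
For the drive free space check, a sketch of the kind of query such a job can run (sys.dm_os_volume_stats requires 2008 R2 SP1 or later):

-- free space on every drive holding a database file
select distinct vs.volume_mount_point, vs.available_bytes / 1024 / 1024 / 1024 as free_gb
from sys.master_files mf
cross apply sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs
order by 1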

Back in the day I used MS Access and SQL DMO to make a morning check thing of sorts. There were no Central Management Servers back then either so I also added the capability for deployments and logging the outcome. At this particular employer there were many sql servers around the world all with bidirectional transactional replication. Schema changes were no joke.

Then came powershell, and another dba and I wrote (and copied from the interwebs) the code to duplicate the checking functionality, which output to a spreadsheet. Truthfully, while I was glad to get rid of Access, the spreadsheet was a pain as there was no way to store history in a way that could be queried. Seeing patterns is very important. We need data, though, before a pattern can be seen!

Then I wrote the sql to replace the powershell. Also around this time database triggers came along, which allowed for object modification history. While this was ok, the biggest risk, having to wait until the next day, was still present. I simply like to know when files are growing.
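
For reference, the object modification history came from a database trigger along these lines. A sketch only: the trigger and dbo.DDLLog table names here are made up.

create trigger trg_ddl_log on database
for ddl_database_level_events
as
begin
    set nocount on
    declare @e xml = eventdata()
    -- shred the EVENTDATA xml into a logging table
    insert dbo.DDLLog (EventTime, LoginName, EventType, ObjectName, TsqlCommand)
    select @e.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime'),
           @e.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname'),
           @e.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname'),
           @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname'),
           @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
end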

Happily now we have Event Notifications. Really nice. The biggest hurdle was writing the stored procedure which would be triggered, as the MS documentation was not a ton of help. Here is mine. It is taken from several articles, most notably this one and this one. Note that the procedure expects other objects; use this script to create the service broker objects. There are also some tables in the procedure for adding email recipients for specific databases and logging the events themselves. This will make the tables. The procedure assumes you have a default, public database mail profile.
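
In outline, the service broker side looks something like this; a sketch only, with made-up names, covering the file growth events I care about:

create queue dbo.EventNotificationQueue
create service EventNotificationService on queue dbo.EventNotificationQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification])
create event notification FileGrowthNotification on server
    for data_file_auto_grow, log_file_auto_grow
    to service 'EventNotificationService', 'current database'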

Posted in DBA, SQL | 2 Comments

Setting Up for Success

A good dba is ready for problems. A key part of being ready is making information available, and no joke, I like code which broadcasts what is going on. Using raiserror with nowait and severity 0 or 10 is great. Using a stored procedure template which logs parameters, run times, errors and any other juicy information collected during the execution to a table is great. Job output files are great. Emails and texts from sql server are great. Event notifications are great. Job completion messages are great. Yippee!!!
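
The raiserror with nowait trick, for anyone who hasn’t seen it; at severity 0 the message prints immediately instead of waiting for the output buffer:

declare @i int = 1
while @i <= 3
begin
    raiserror('working on batch %d of 3...', 0, 1, @i) with nowait
    -- ... the actual work goes here ...
    set @i = @i + 1
end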

Something I’ve not yet used is continuously running traces. I’ve set up C2 auditing and of course used traces for detecting a specific login, performance tuning and troubleshooting work. I understand this technique is popular with dba services. I wonder when the time will come for it to be needed where I work, and whether I will recognize the situation.
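
One continuously running trace is already there out of the box: the default trace. Reading it is a reasonable first taste of the technique; a sketch:

-- recent events from the default trace, decoded via sys.trace_events
select top (50) te.name as event_name, tg.StartTime, tg.DatabaseName, tg.TextData
from sys.traces t
cross apply fn_trace_gettable(t.path, default) tg
join sys.trace_events te on tg.EventClass = te.trace_event_id
where t.is_default = 1
order by tg.StartTime desc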

Posted in DBA | 2 Comments

In The Beginning

Like everyone with taste, I love Dune. I’ve read it countless times and I am not exaggerating. Dune, like all literature, conveys philosophy; for example, one well known quote from Dune is: “A beginning is the time for taking the most delicate care that the balances are correct.”

This is super important when it comes to new staff. How departments treat the newcomer is a great maturity metric.

This area was a big gap in the department, so I went to work! We now have an official dba new hire meeting, where we discuss:

  1. The names and purposes of all the sql servers. I set up a Central Management Server for this so that the department has the same information as the database admins.
  2. The names of the reporting servers. I had the system guys create sub-domains for the dev, QA and production SSRS installs.
  3. Who the leads are for the different applications
  4. How to get code into production
  5. How we do deployments
  6. Tidal Scheduler
  7. Stored procedure and DML templates
  8. Our ticketing system and severity definition
  9. Our general security model

In my experience, all of this is best discussed with a dba because it helps create the kind of relationship, with plenty of discussion, that the department needs to be successful.

In the initial meeting I also discuss common shares, such as those for

  1. SSIS packages and config files
  2. SSRS log files
  3. Dev, QA, production and ad hoc backups
  4. The location for job output files

I am not big on documentation, to be frank; we evolved to communicate verbally, face to face. Then there is the often repeated joke that documentation is out of date essentially as soon as it is written. Of course that is not totally true, and there are certainly some things which benefit from being written out. The items above are all in a document, and the url to it gets sent to the attendees following the meeting for their convenience. Because it is so important, I’ll say it again: the meeting is primarily a way to set us up for quality communication.

Posted in DBA, Leadership, Metrics | Leave a comment


The criteria for being a good leader: lots has been written about that! A good leader enables the group to make a decision, thus gaining the insights of many. People won’t speak up unless they feel safe and supported. Ideas need to be properly considered and thought through. Rather than being right, the proper goal must be finding and executing the best solution.

My favorite metric is a manager’s staff turnover. At a company with a pension or options, the date people vest shouldn’t coincide with their two weeks’ notice.

Posted in DBA, Leadership, Metrics | 2 Comments

Tier 1, 2 and 3

One day it hit me how unusual it was that senior level people were receiving calls from end users as part of the call rotation. (Happily for us, the dbas were not in the list.)

True story: my very first job in the computer field was first level support. I much prefer face to face discussions and, for a variety of reasons which are all funny now, don’t miss that job at all. It did expose me to MS Access which led to SQL Server though.

Anyway, it did not seem like the best use of time to have what were clearly tier 3 people helping end users with drive mapping issues. Furthermore, the problems tier 3 people resolve, and the projects of which we are a part, require focus and minimal interruptions to be completed successfully and on time.

I am glad to be able to say, though not as glad as the tier 3 systems guys let me tell you, that we now have a proper triage system in place and no calls or tickets go directly to my group without prior discussion.

Posted in Leadership | Leave a comment

The Code Review

At work, some years ago, I was able to implement a dba code review and practice deployment requirement for every Change Request. (A Change Request documents that the proper steps have been taken so that a deployment has a good chance of success.) This was tough because my department loves project concurrency; it added a ton of work for the dba team and resulted in several conversations with developers who were pretty excited after hearing their sql required changes to comply with the published standards. Wow.

To help with the process, I gave presentations and published the best practices standards document. The code review, in short, is to make sure obvious DDL such as foreign keys are present; comments exist; and performance has been given more than lip service.
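
Part of the foreign key check can even be automated; for example, a sketch of a query to flag tables with no foreign keys in either direction:

-- tables with no inbound or outbound foreign keys: candidates for a closer look
select s.name + '.' + t.name as table_name
from sys.tables t
join sys.schemas s on t.schema_id = s.schema_id
where not exists (select * from sys.foreign_keys fk where fk.parent_object_id = t.object_id)
  and not exists (select * from sys.foreign_keys fk where fk.referenced_object_id = t.object_id)
order by 1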

Since its inception, I am glad to see that the developers, on their own, have instituted a peer code review. It has also been great to see how the developers have incorporated the ideas and techniques from our discussions into their sql.

What’s next?

We don’t have a great system for performance reviews, and I’ve had a very difficult time getting attention on slow running sql. Management hasn’t been super excited about it and that is, frankly, a big hurdle.
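
To at least quantify the slow running sql, the plan cache is a decent start; a sketch:

-- top statements by average elapsed time
select top (10)
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count / 1000 as avg_elapsed_ms,
    substring(st.text, qs.statement_start_offset / 2 + 1,
        (case when qs.statement_end_offset = -1 then datalength(st.text)
              else qs.statement_end_offset end - qs.statement_start_offset) / 2 + 1) as statement_text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
order by qs.total_elapsed_time / qs.execution_count desc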

Another issue is that, per the policy, only changed or new sql is “legally” review-able. I ask developers, “Would you want your next employer to know this is your sql?” because I personally believe that if something is touched then the entire object should be brought up to the new standard.

Change is possible, right?

Posted in DBA, SQL | 2 Comments