Small Changes

Somewhere I read that two lines which start from the same point in almost the same direction, just a fraction of a degree from parallel, will eventually be arbitrarily far apart.

Another one I like is the equation

Patience + Persistence = Improvement

Once I started thinking about my own goals from the perspective of not needing to make huge changes, I ended up improving more than I ever had.

Posted in Getting Unstuck

Converting Tidal File Events to Control-M File Watchers

We are converting from Tidal Scheduler to Control-M. One of the hurdles is creating all the files the Control-M File Watchers need. The following script worked for us.

SET NOCOUNT ON

USE dba

--DEALLOCATE c
IF OBJECT_ID('usp_TidalFileEventExport') IS NULL
EXEC('CREATE PROC usp_TidalFileEventExport AS SELECT 1')

DECLARE
@trgmst_id INT,
@trgmst_name VARCHAR(80),
@sql VARCHAR(8000)
DECLARE c CURSOR FAST_FORWARD FOR
SELECT trgmst_id FROM Admiral..trgmst WHERE trgmst_type=4 -- file events
OPEN c
WHILE 1=1
BEGIN
FETCH NEXT FROM c INTO @trgmst_id

IF @@FETCH_STATUS<>0
BREAK

-- Build the Control-M File Watcher rules text for this Tidal file event
SELECT @sql=
'INTERVAL <5> # Sleep interval between file searches (seconds)
MIN_SIZE 0Kilo # Minimum file size
MIN_DETECT <3> # Number of consecutive size checks that must match before Control-M considers the size static

ON_FILEWATCH '+REPLACE(REPLACE(trgmst_info.value('(evtmdef/SOURCE)[1]','varchar(4000)'),'<<prod file server>>','<<dev file server>>'),'PROD','DEV')+'\'+
trgmst_info.value('(evtmdef/FILEMASK)[1]','varchar(4000)')+' CREATE
THEN
DO_CMD move %FILENAME% '+REPLACE(REPLACE(trgmst_info.value('(evtmdef/TARGET)[1]','varchar(4000)'),'prod file server','dev file server'),'PROD','DEV')+'
END_ON',
@trgmst_name=trgmst_name

FROM (
SELECT CAST(trgmst_info AS XML) AS trgmst_info,
trgmst_name
FROM Admiral..trgmst
WHERE trgmst_type=4 -- file events
AND trgmst_id=@trgmst_id
) a

-- Wrap the rules text in a proc so bcp queryout can export it to a file
SET @sql='alter proc usp_TidalFileEventExport as select '''+@sql+''''
EXEC(@sql)

SET @sql='bcp "exec dba..usp_TidalFileEventExport" queryout "d:\sqlserver\'+@trgmst_name+'_FW.txt" -c -S -T'
EXEC master..xp_cmdshell @sql

END
CLOSE c
DEALLOCATE c
DROP PROC usp_TidalFileEventExport
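For reference, here is roughly what one generated <job name>_FW.txt rules file looks like. The UNC paths and file mask below are illustrative placeholders, not values from our environment:

```
INTERVAL <5> # Sleep interval between file searches (seconds)
MIN_SIZE 0Kilo # Minimum file size
MIN_DETECT <3> # Number of consecutive matching size checks before the size is considered static

ON_FILEWATCH \\<<dev file server>>\inbound\*.csv CREATE
THEN
DO_CMD move %FILENAME% \\<<dev file server>>\processed\
END_ON
```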

Posted in Control-M, Tidal Enterprise Scheduler

Recovery Models and Backups Re-Found

This is the short version of what got me to understand recovery models and what happens during a backup. Wow, Kimberly Tripp is a great asset.

Back in the day I downloaded it as an individual chapter. I’ve forwarded it so many times it’s not funny. You can get it here.

Posted in Day to day, DBA

Requesting a SQL Configuration Change Requires Evidence

Part of a DBA’s job is making sure everyone is comfortable with proposed changes. I recently created a Change Request to increase the SQL setting Cost Threshold for Parallelism. Here is the justification I provided to get sign-off from the responsible manager:

  1. A parallel plan is able to use more CPU, which means memory and/or disk become the next potential bottleneck(s).
  2. Per the documentation provided by the DBA Team, we have an excessive ratio for the wait type (called CXPACKET) seen when there are too many parallel plans for the disk subsystem. This means SQL Server is waiting on the disks while tying up CPU time; in other words, the disks are currently the performance bottleneck. Time was requested for the research and write-up because of our ad hoc experience of seeing queries with the CXPACKET wait type.
  3. The query processor assigns a numerical cost to each query and, based on whether this is over or under the cost threshold for parallelism setting, will choose to execute a parallel or non-parallel plan.
  4. By raising the threshold we reduce the potential number of parallel plans, which [should] free up CPU time for other queries and configure SQL Server to match the underlying hardware. The “potentials” and “shoulds” are because we may have so many high-cost plans that we will need to raise the threshold further before we see a drop in CXPACKET waits.
  5. When we first installed <<app 1>>, and on <<SQL instance 1>> in general, we had query issues due to this wait type. The quick fix was to set the max degree of parallelism to 1. That setting determines how many CPUs SQL Server can use for a query; setting it to 1 effectively set the cost threshold to infinity, ensuring no parallel plans. Thus we have already tested an even more restrictive setting in Production.
  6. This is a very old setting and the default has not changed. As you know, computer hardware has evolved greatly; the cost threshold default set during installation has not evolved with it.
  7. We’ve run this setting in the Dev, SIT and UAT SQL instances for over a month with no issues.
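For context, a change like this is applied through sp_configure. The sketch below is illustrative only; the value 50 is an assumption for the example, not the number from our Change Request:

```sql
-- 'cost threshold for parallelism' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Check the current value
EXEC sp_configure 'cost threshold for parallelism';

-- Apply the proposed change (50 is an illustrative value; the default is 5)
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```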
Posted in Configuration, DBA

Great post on Compression Options

We are in the process of implementing Columnstore Indexes. I came across this page while researching and shared it with the DBA Team.

http://aboutsqlserver.com/tag/clustered-columnstore-indexes/

Posted in Configuration, DBA, SQL

The Roadmap

We are working on the Roadmap again. It is a great exercise to plan the goals for the next one to five years and tie them back to department and company goals.

I want to at last make select rights to production a thing of the past. This will involve several departments and changing how we manage our data, since we rely on analysts’ select rights to handle data quality issues. A lofty goal!

Posted in DBA, Leadership

Using SSMS as a Front End

I was asked to explain my reservations about using SSMS as a front end.

The first issue concerns our data quality: why are we letting bad data into a database at all? Speak with the data warehouse group for a straightforward perspective on the level of workarounds required because of what we let into our databases. Data quality is a measurable Six Sigma metric, which I discussed in the Data Integrity class I gave a few months ago. See http://docs.oracle.com/cd/B31080_01/doc/owb.102/b28223/concept_data_quality.htm#BABGGFGC

The second issue is what constitutes a safe way to modify data: do we sincerely believe SSMS is a good application front end? Most data “corruption” happens because regular people make a simple mistake in their SQL. Like data quality, how often ad hoc updates are run in production can serve as a maturity indicator. Imagine someone running ad hoc SQL that updates your bank account… That is not good!

Posted in DBA, Metrics

The Biggest Variable

Which is more important, the chariot or the horse? Of course the question assumes it is possible to separate two things which may be inseparable, so is the question even reasonable? It is better to strive to be the best while acknowledging the value and importance of one’s dependencies.

Posted in Day to day, DBA, Employment

SQL 2014, BIDS and SQL Server Data Tools

With SQL 2014, the tools used by developers are no longer part of the installation media, and in fact SSMS is not even necessary for some of them. Be prepared to discuss the options with the development group before any SQL installations take place to avoid unnecessary delays. In my group we dedicated a DBA to this project to keep communication consistent.

Posted in Configuration, DBA, Leadership

Querying Tidal for Job Status

The Tidal Scheduler interface does not appear to offer a way to look ahead at what is going to run and easily turn that into a report. Here is the query I wrote to meet this need:

use Admiral

select -- top 100
case conn.nodmst_type
when 1 then 'Master'
when 6 then 'Windows'
when 11 then 'SQL'
else 'Type not yet deciphered'
end as agent_type,
conn.nodmst_name as agent_name,
jm.jobmst_name,
case jr.jobrun_status
when 0 then 'Scheduled'
when 1 then 'Waiting On Dependencies'
when 2 then 'Waiting On Operator'
when 3 then 'Held'
when 5 then 'Timed Out For Day'
when 7 then 'Agent Unavailable'
when 8 then 'Agent Disabled'
when 10 then 'Waiting On Group'
when 11 then 'Waiting On Children'
when 49 then 'Waiting On Resource'
when 50 then 'Launched'
when 51 then 'Active'
when 52 then 'Stopped'
when 53 then 'Deferred'
when 66 then 'Error Occurred'
-- Not joining back to the underlying table because 97 shows up there incorrectly as Completed Normally
when 97 then 'Completed Normally (Gathering Output)'
when 98 then 'Completed Abnormally'
when 99 then 'Externally Defined'
when 101 then 'Completed Normally'
when 103 then 'Completed Abnormally'
when 104 then 'Skipped'
when 105 then 'Orphaned'
when 106 then 'Aborted'
when 107 then 'Externally Defined'
when 108 then 'Timed Out'
when 109 then 'Cancelled'
end as jobrun_status_desc,
--jr.jobrun_proddt as run_date, -- this field appears to be like jobrun_esttime but with less precision

jr.jobrun_launchtm,
jr.jobrun_lstchgtm,
DATEDIFF(mi,jobrun_launchtm,jobrun_lstchgtm) as run_duration, -- minutes
conn.nodmst_alias,
conn.nodmst_name,
jr.jobrun_fromtm,
jr.jobrun_untiltm /*,
u.usrmst_name */
from jobrun jr
join jobmst jm on jr.jobmst_id=jm.jobmst_id
join nodmst conn on jr.nodmst_id=conn.nodmst_id
-- join usrmst u on conn.nodmst_user=u.usrmst_id --conn.usrmst_id=u.usrmst_id
where
jm.jobmst_name not like '%<<some job name>>%' and
jm.jobmst_name like '%<<some job name>>%' and
--jr.jobrun_esttime<GETDATE() and
--jm.jobmst_type in (8,2) and -- 1=group; 2=job; 8=sql job
jm.jobmst_type in (8) and -- 1=group; 2=job; 8=sql job
--jr.jobrun_status=97 and -- 97=completed normally (gathering output)
--jr.jobrun_esttime between '20121022' and '20121024' and
jr.jobrun_status in (
--0, -- Scheduled
1, -- Waiting On Dependencies
2, -- Waiting On Operator
3, -- Held
5, -- Timed Out For Day
7, -- Agent Unavailable
8, -- Agent Disabled
10, -- Waiting On Group
11, -- Waiting On Children
49, -- Waiting On Resource
50, -- Launched
51, -- Active
52, -- Stopped
53, -- Deferred
66, -- Error Occurred
97, -- Completed Normally (Gathering Output)
98, -- Completed Abnormally
99, -- Externally Defined
101, -- Completed Normally
103, -- Completed Abnormally
104, -- Skipped
105, -- Orphaned
106, -- Aborted
107, -- Externally Defined
108, -- Timed Out
109 -- Cancelled
)
order by
--jr.jobrun_esttime desc
--DATEDIFF(mi,jobrun_launchtm,jobrun_lstchgtm) desc
--jr.jobrun_launchtm
jr.jobrun_lstchgtm

Posted in SQL, Tidal Enterprise Scheduler