Tuesday, July 30, 2013

[how to] PostgreSQL kill - Sighup pid


PostgreSQL kill - Sighup pid

Posted: 30 Jul 2013 08:41 PM PDT

" To reload the configuration files, we send the SIGHUP signal to the postmaster, which then passes that on to all connected backends. That's why some people call reloading the server "sigh-up-ing".

As reloading the configuration file is achieved by sending the SIGHUP signal, we can reload the configuration file just for a single backend using the kill command.

First, find out the pid of the backend using pg_stat_activity. Then, from the OS prompt, issue the following:

kill -SIGHUP pid "

I don't understand the bold words. We have many pids for backends, and if we send SIGHUP to one "pid", how does that backend pick up the changes from the reloaded configuration file (postgresql.conf)?

Many thanks.

Need advice for table design for multi-user access

Posted: 30 Jul 2013 03:33 PM PDT

I have an inventory application that needs to support multi-user. As of right now, only one user can access and manage their items. I've gotten a lot of requests to support multiple users so they can all manage the same inventory.

I have a table called user_items that stores item information. To keep it simple, I'll include just the relevant column names.

mysql> select primary_item_id, user_id, item_name from users_item limit 2;
+-----------------+---------+-----------------+
| primary_item_id | user_id | item_name       |
+-----------------+---------+-----------------+
|             100 |       4 | Stereo Receiver |
|             101 |       5 | Couch           |
+-----------------+---------+-----------------+

I've created a mapping table to map the items to users.

+-------------+-------------+----------------+------------+-----------+
| map_user_id | map_item_id | unique_item_id | item_owner | privilege |
+-------------+-------------+----------------+------------+-----------+
|           4 |         100 |              1 |          1 |      NULL |
|          13 |         100 |              1 |          1 |      NULL |
|           5 |         101 |              1 |          5 |      NULL |
+-------------+-------------+----------------+------------+-----------+

The unique_item_id column is the item_id that's displayed to the users. So item #1 for user #4 is "Stereo Receiver." Item #1 for user #5 would be a couch. The item_owner field doesn't mean much for the time being. I'm not sure if I need it but it's there for now as I play with the schema and code.
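For reference, a minimal DDL sketch of the mapping table as described (types, nullability, and the key are assumptions, not the asker's actual definition):

CREATE TABLE user_item_map (
    map_user_id    INT NOT NULL,  -- references the users table
    map_item_id    INT NOT NULL,  -- references users_item.primary_item_id
    unique_item_id INT NOT NULL,  -- per-user display id
    item_owner     INT NULL,
    privilege      INT NULL,
    PRIMARY KEY (map_user_id, map_item_id)
);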

Anyway, this works fine, except that I need multiple users to track the same item(s). Instead of letting users share some items and also track their own, my version of "multi-user" means they all track exactly the same set of items. If user #13 adds a new item, user #4 also has access to said item.

Any suggestions? I think I shot myself in the foot by offering unique IDs for each item, but it is what it is, so now I have to work with what I have.

ORDER BY items must appear in the select list [...]

Posted: 30 Jul 2013 02:28 PM PDT

Using Microsoft SQL Server 2008, I get the following error.

Msg 104, Level 16, State 1, Line 43
ORDER BY items must appear in the select list if the statement contains a UNION, INTERSECT or EXCEPT operator.

The query I am using is kind of complex, but the CASE expression inside the ORDER BY clause cannot see the aliased column names. Here is a brief example.

SELECT 1 AS foo, 2 AS bar
UNION ALL
SELECT 10 AS foo, 20 AS bar
ORDER BY CASE WHEN foo = 2 THEN 1 END;

In my production query the left-query needs to be ordered by the column [360_set] found in the table, and the right-query needs to be ordered as if [360_set] was null.

How do I fix this error, and why does this syntax generate an error?
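A common workaround (a hedged sketch, not necessarily the only fix) is to push the UNION into a derived table, so the outer ORDER BY sees plain column names:

SELECT foo, bar
FROM (
    SELECT 1 AS foo, 2 AS bar
    UNION ALL
    SELECT 10 AS foo, 20 AS bar
) AS u
ORDER BY CASE WHEN foo = 2 THEN 1 END;

The restriction exists because with a UNION the ORDER BY applies to the combined result and may only reference items in the select list; inside a derived table, the ordering happens after the union is already materialized as ordinary columns.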

Here is the version info,

Microsoft SQL Server Management Studio     10.0.5512.0
Microsoft Analysis Services Client Tools   10.0.5500.0
Microsoft Data Access Components (MDAC)    6.1.7601.17514
Microsoft MSXML                            3.0 6.0
Microsoft Internet Explorer                9.10.9200.16635
Microsoft .NET Framework                   2.0.50727.5472
Operating System                           6.1.7601

is it possible to pass the message from sp_add_alert to the job?

Posted: 30 Jul 2013 02:41 PM PDT

The sp_add_alert stored procedure can react to different system messages and execute a job in response. It can also notify a person with the message text by email, pager, or net send.

But how do I pass the sysmessages text (corresponding to the event that caused the alert) not to a person but to the job that is executed in response to the alert?

Let's consider this message:

select [description] from sysmessages where msglangid = 1033 and error = 829  

This will yield:

Database ID %d, Page %S_PGID is marked RestorePending, which may indicate disk corruption. To recover from this state, perform a restore.

I'd like to receive this message in the job, so the job knows what %d and %S_PGID caused a problem.
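SQL Server Agent exposes alert tokens that a job step can read, which is the usual route for this. A hedged sketch (token syntax follows the Agent token documentation; "Replace tokens for all job responses to alerts" must be enabled in the Agent properties, and the logging table is hypothetical):

-- T-SQL job step body, run by the alert
DECLARE @err int = $(ESCAPE_NONE(A-ERR));
DECLARE @sev int = $(ESCAPE_NONE(A-SEV));
DECLARE @msg nvarchar(2048) = N'$(ESCAPE_SQUOTE(A-MSG))';

INSERT INTO dbo.alert_log (error_number, severity, message_text)
VALUES (@err, @sev, @msg);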

Looking for one value in multiple tables and still return rows if finds it in any of those tables

Posted: 30 Jul 2013 01:15 PM PDT

SELECT a.id, a.Nome, d.email, e.ddd, e.telefone
FROM cadClientes as a
join cadCliente_ParamsPF as b on a.id = b.idCliente
join Enderecos as c on a.id = c.idCliente
join Emails_Clientes as d on a.id = d.idCliente
join Telefones_Clientes as e on a.id = e.idCliente
join Contatos_Clientes as f on a.id = f.idCliente
WHERE idTipoCliente = 1
ORDER BY a.id

Based on that query, I am looking for a way to search for a name in cadClientes.Nome and Contatos_Clientes.name, and to return rows if the name is found in either cadClientes or Contatos_Clientes.

The problem is that if there is no row in Contatos_Clientes that refers to a client in cadClientes, SQL Server returns 0 rows.

Is there a way to implement that in one query or I would have to use multiple queries?
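A hedged sketch of one single-query shape: make Contatos_Clientes a LEFT JOIN so a missing contact row no longer eliminates the client, then test both name columns (the @name parameter and the f.name column are assumptions based on the question):

SELECT a.id, a.Nome, d.email, e.ddd, e.telefone
FROM cadClientes AS a
JOIN cadCliente_ParamsPF AS b ON a.id = b.idCliente
JOIN Enderecos AS c ON a.id = c.idCliente
JOIN Emails_Clientes AS d ON a.id = d.idCliente
JOIN Telefones_Clientes AS e ON a.id = e.idCliente
LEFT JOIN Contatos_Clientes AS f ON a.id = f.idCliente
WHERE a.idTipoCliente = 1
  AND (a.Nome LIKE '%' + @name + '%' OR f.name LIKE '%' + @name + '%')
ORDER BY a.id;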

SHRINKFILE best practices and experience

Posted: 30 Jul 2013 01:27 PM PDT

Preamble: In general it's a big no-no, but believe me, there are rare cases when space is really needed. For example, Express Edition is limited to 10 GB. Imagine that you discover that with a data type conversion (a blob column) you can free up a significant amount of space. But afterwards the DB file still has the same size, as we know, and the 10 GB limit also didn't change magically. So some kind of SHRINK is needed. That was an example.

In my test environment I performed:

DBCC SHRINKFILE (Envision, NOTRUNCATE)
DBCC SHRINKFILE (Envision, TRUNCATEONLY)

That did the trick (I know that minimizes the free space; in the real world I would leave some free space), but it took many, many hours to finish. As we know, it's a single-threaded process (see the "Strange behaviour DBCC Shrinkfile" thread: "it works as a series of very small system transactions so there is nothing to rollback." - Paul Randal, http://www.sqlservercentral.com/Forums/Topic241295-5-1.aspx). We also know that it badly fragments the indexes (http://www.mssqltips.com/sqlservertip/2055/issues-with-running-dbcc-shrinkfile-on-your-sql-server-data-files/), and I can confirm that. I didn't experience the log file growth described in http://www.karaszi.com/SQLServer/info_dont_shrink.asp, though.

I issued some INDEX REBUILD and REORGANIZE and those finished within seconds.

My questions:

  1. What's the big deal about shrink if I can just fix the index fragmentation within seconds after the shrink? I don't understand. Here on DBA Stack Exchange the "shrink" tag (http://dba.stackexchange.com/tags/shrink/info) says "Pretty much the worst thing you could do to a SQL Server database. In short: It sacrifices performance to gain space." and refers to another popular article about it. But index fragmentation can be fixed. (A way to quantify it is sketched after this list.)
  2. Why didn't I experience any log file growth?
  3. What if I REBUILD the indexes first, after the space free-up operation? Can that substitute for the first DBCC SHRINKFILE (Envision, NOTRUNCATE), so that I just need DBCC SHRINKFILE (Envision, TRUNCATEONLY)? I have a feeling the two work at different logical levels, but I have to ask.
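On question 1, a hedged way to measure the fragmentation before and after the shrink (standard DMV, SQL Server 2005+):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;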

Bottom line: I promise I won't do shrink regularly or anything. But this is a situation where a cap is hit and shrink is needed.

How do I let SQL Server 2005 know that I changed the host name on an Amazon AWS server?

Posted: 30 Jul 2013 12:46 PM PDT

I am using an Amazon AWS SQL Server 2008 R2 server for dev purposes. In order to follow a new naming convention, we changed the name of the host. I then tried to let SQL Server know about this server name change, with the usual:

EXEC sp_dropserver '<oldname>'
GO
EXEC sp_addserver '<newname>', 'local'
GO

But then SQL Server complains that server oldname does not exist.

When I do this: select @@SERVERNAME

I get back a host name that starts with 'IP-' and is then followed by some hex. Apparently Amazon does some funky DNS aliasing behind the scenes and comes up with its own internal name, even though I am using oldname and SQL Server itself thinks it's oldname.

How do I let SQL Server know that the name of the server is now newname?
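A hedged sketch: drop whatever name the instance currently has registered, rather than the old DNS name, then add the new one (a restart of the SQL Server service is needed before @@SERVERNAME reflects the change):

-- what does the instance think the local server is called?
SELECT name FROM sys.servers WHERE server_id = 0;   -- server_id 0 = local

DECLARE @old sysname = (SELECT name FROM sys.servers WHERE server_id = 0);
EXEC sp_dropserver @old;
EXEC sp_addserver '<newname>', 'local';
-- restart the SQL Server service, then re-check SELECT @@SERVERNAME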

thanks aj

ORA-03113: end-of-file on communication channel Can not connect

Posted: 30 Jul 2013 12:24 PM PDT

This is the log file:

Errors in file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_arc0_5024.trc:
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: 'D:\ORACLEBACKUP\XE\ONLINELOG\O1_MF_3_8M981VNW_.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified. ["Das System kann den angegebenen Pfad nicht finden."]
Errors in file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_lgwr_4064.trc:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLEBACKUP\XE\ONLINELOG\O1_MF_1_8M981W22_.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified.
Errors in file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_lgwr_4064.trc:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLEBACKUP\XE\ONLINELOG\O1_MF_1_8M981W22_.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified.
Errors in file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_ora_664.trc:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLEBACKUP\XE\ONLINELOG\O1_MF_1_8M981W22_.LOG'
USER (ospid: 664): terminating the instance due to error 313
System state dump requested by (instance=1, osid=664), summary=[abnormal instance termination].
System State dumped to trace file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_diag_3556.trc
Dumping diagnostic data in directory=[cdmp_20130730170815], requested by (instance=1, osid=664), summary=[abnormal instance termination].
Instance terminated by USER, pid = 664

Does anybody have an idea how to solve the problem?

I am working on Windows Server. Do you need any more information?
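The errors all point at online redo logs under D:\ORACLEBACKUP\ that the instance cannot open (OS error 3: path not found), so the instance terminates itself at log switch time. A hedged, use-at-your-own-risk sketch of one recovery path, assuming the D:\ path is really gone and the affected groups are not needed for crash recovery:

-- SQL*Plus, connected AS SYSDBA
STARTUP MOUNT;
SELECT group#, status, member FROM v$logfile;  -- confirm the broken members
-- recreate a lost log group (only safe for groups that are INACTIVE;
-- a lost CURRENT/ACTIVE group needs media recovery instead)
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
ALTER DATABASE OPEN;

After CLEAR UNARCHIVED, existing backups no longer cover the cleared redo, so a fresh full backup is advisable.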

Which all system parameters to be considered for standard Vacuum process

Posted: 30 Jul 2013 05:25 PM PDT

We want to run a standard vacuum process on our production database, which is over 100 GB and has millions of dead tuples.

Can anyone suggest which system parameters we need to keep in mind when setting the cost-based vacuum settings? I mean CPU, I/O, memory, and disk.

We cannot run VACUUM FULL because the database should be up and running continuously, so we just want to find the most appropriate values without affecting the system much.
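For orientation, these are the cost-based vacuum delay knobs in postgresql.conf; the values shown are only the shape of a configuration, not recommendations (tune against your own I/O headroom):

# pause this long each time the cost budget below is spent (default 0 = off)
vacuum_cost_delay = 10ms
# budget of cost units accumulated before sleeping (default 200)
vacuum_cost_limit = 200
# relative costs: page found in shared_buffers, read from disk, dirtied
vacuum_cost_page_hit = 1
vacuum_cost_page_miss = 10
vacuum_cost_page_dirty = 20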

Calculated Measure to get only most current from one dimension on snapshot fact but keep other filters

Posted: 30 Jul 2013 03:11 PM PDT

I'm working on a tabular cube in SSAS 2012 SP1 CU4. I have 3 dimensions (Requisition, Requisition Status, Date) and 1 fact (Requisition Counts). My fact table is at the grain of requisitionKEY, RequisitionStatusKEY, SnapshotDateKey.

I have calculated measures that essentially get the LastNonEmpty value (like a semi-additive measure) for the given period, whether it is year, month, or date:

Openings:=CALCULATE(Sum('Requisition Counts'[NumberOfOpeningsQT]),
    Filter('Date', 'Date'[DateKey] = Max('Requisition Counts'[SnapshotDateKEY])))

This works well until you throw Requisition Status into the mix. I have rows for each requisition for every day in 2013. For one of the requisitions, the Requisition Status was Pending for the first 6 months and then it changed to Approved on all data from July 1 to date. When I summarize the number of openings for the requisition at the month level for July, users see two rows: the sum of the openings for the last populated day in July that it was pending and the sum of the openings for the last populated day in July that it was approved.
(Screenshot: pivot table showing the two status rows.)

Although the total of 2 is correct, I need to change this calculation so that I only get the most current requisition status for the date period selected (Approved) and either show 0 or null or blank for the Pending approval line in the pivot table.

The Requisition Status table looks like this: (screenshot: Requisition Status table)

Update: Here is a link to a PowerPivot model I made that has some scrubbed data in it to help answer this question. This should better explain the table schemas. The NumberOfOpeningsQT field is basically the number of open positions they have for that job. Sometimes it is 1, sometimes it is more than 1. It doesn't usually change over time, but I guess it could. I'm trying to make the Openings calculation give me the desired answer. I threw some other calculations in there to show some things I had tried that showed promise but that I couldn't get to work.

Need to install Oracle Express 11g Release 2 on a Windows 7 64-bit laptop

Posted: 30 Jul 2013 06:45 PM PDT

I need the Oracle 11g Release 2 sample schemas (HR, OE, etc.) in order to do most of the available online tutorials. I was hoping to install Oracle Express Edition on my Windows 7 laptop to get these; but I have never heard of anybody successfully installing Oracle XE on a 64-bit Windows platform.

Is there a version of Oracle XE 11g R2 available for Windows 7? And if so, could you please point me to it?

Thanks...

Help, my database isn't performing fast enough! 100M merge with 6M needs < 1 hour!

Posted: 30 Jul 2013 05:45 PM PDT

I have a server right now receiving more raw data files in one hour than I can upsert (insert -> merge) in an hour.

I have a table with 100M (rounded up) rows. The table is currently MyISAM. It has 1000 columns, mostly boolean plus a few varchar.

Currently the fastest way I've found to get the information into my DB has been:

  1. Process raw data into CSV files.
  2. LOAD DATA INFILE into the rawData table.
  3. Insert the rawData table into Table1 (on duplicate key, run my function).
  4. Truncate rawData.
  5. Repeat.

This worked fine until I started merging 6M+ rows into 100M rows and expecting it to take under an hour.

I have 16 GB of RAM, so I set my key buffer to 6 GB. My query cache pool is 16M and my query cache limit is 10M. I would just replace the information, but it has to be an upsert: update the fields that are true if the row exists, and insert if it does not.

Things I'm looking into at the moment: possibly switching the table to InnoDB? I'm not sure about the performance; the insert into an empty table is fine, it's the merge that's slow.

Maybe allowing more table cache? Or even query cache?

Merge Code:

b.3_InMarket = (b.3_InMarket OR r.3_InMarket),

To compare my 2 bool columns.
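For context, a hedged sketch of the upsert shape being described, using INSERT ... ON DUPLICATE KEY UPDATE (table and column names are assumptions; the backticks matter because the column name starts with a digit):

INSERT INTO Table1 (id, `3_InMarket`)   -- plus the other columns
SELECT r.id, r.`3_InMarket`
FROM rawData AS r
ON DUPLICATE KEY UPDATE
    `3_InMarket` = (`3_InMarket` OR VALUES(`3_InMarket`));

In ON DUPLICATE KEY UPDATE, the bare column name refers to the existing row and VALUES() to the incoming one, which matches the "OR the two booleans together" merge rule above.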

Update

  • OK, I set up RAID 0.
  • Changed my query to lock the tables for writes when inserting.
  • When importing the CSV I'm disabling keys, then re-enabling them before the upsert.
  • Changed concurrent_insert to 2.

How un-clustered is a CLUSTER USING table

Posted: 30 Jul 2013 12:51 PM PDT

I have some tables which benefit greatly from CLUSTER ON / CLUSTER USING in PostgreSQL:

# CLUSTER table USING index_name;
# ANALYZE VERBOSE table;
# CLUSTER VERBOSE;

A maintenance task periodically runs CLUSTER VERBOSE to keep things fresh. But is there a test I can run to see how fragmented the table is, prior to running CLUSTER VERBOSE? Maybe something like:

# CLUSTER ANALYZE  table 40000 records. 4000 observed clusters, 5000 potential clusters (20% fragmentation)  

Note that I use CLUSTER so data accessed at the same time is "defragmented" into a small number of disk blocks. For example, I have thousands of attributes that go with each page; a CLUSTER page_attribute USING page_id; puts all of a page's attributes next to each other, greatly reducing disk load.
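A hedged approximation of such a test already exists: after ANALYZE, pg_stats exposes a correlation figure per column (physical row order vs. logical order, from -1 to 1); values near 1.0 mean the table is still well clustered on that column:

SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'page_attribute'   -- table/column from the example above
  AND attname = 'page_id';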

Bitmask Flags with Lookup Tables Clarification

Posted: 30 Jul 2013 07:45 PM PDT

I've received a dataset from an outside source which contains several bitmask fields stored as varchars. They range in length from as few as 3 bits to as many as 21. I need to be able to run SELECT queries against these fields using AND or OR logic.

Using a calculated field, where I just convert the bits into an integer value, I can easily find rows that match an AND query, by using a simple WHERE rowvalue = requestvalue, but the OR logic would require using bitwise & in order to find matching records.

Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel that there would be a huge performance hit when doing bitwise & operations to filter my SELECT results.

I came across this answer from searching and it looked like it may fit my needs, but I need some clarification on how it is implemented.

Is this as simple as creating a lookup table that has all possible search conditions?

Example for 3 bits using (a & b) (Edit: Wrong bitwise op)

001,001
001,011
001,101
001,111
010,010
010,011
010,110
011,011
011,111
etc

The author mentions that it's counter-intuitive initially, but I can't help but feel I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows.

Any clarifications on the answer I linked above or other suggestions that would preserve the existing database are appreciated.

Edit: A more concrete example using small data.

Four flags, HasHouse,HasCar,HasCat,HasDog, 0000 is has none, 1111 is has all.

Any number of flags, from all to none, can be flipped, and results must be filtered where selection matches all (Using exact value comparison) or at least 1 (Using bitwise &).

Adding a single calculated column for each bitmask is OK, but adding a column for each bit for more than 100 bits, coupled with how to insert/update the data, is why I'm trying to find alternative solutions.
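For reference, a hedged sketch of the two filters against a computed integer column (table, column, and parameter names are invented):

-- match ALL requested flags (exact containment)
SELECT * FROM dbo.People WHERE flags & @mask = @mask;

-- match AT LEAST ONE requested flag
SELECT * FROM dbo.People WHERE flags & @mask <> 0;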

SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?

Posted: 30 Jul 2013 01:45 PM PDT

I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. I'm trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job.

SQLite writing a query where you select only rows nearest to the hour

Posted: 30 Jul 2013 04:45 PM PDT

I've got a set of data where readings have been taken approximately every minute for about three months, and the time has been stored as a Unix timestamp. There is no regularity to the timestamps (i.e. the zero minute of the hour may not contain a reading: 00:59:55 could be followed by 01:01:01), and whole days may be missing.

What I need is the row nearest to each hour, with the timestamp rounded to the hour, as long as the nearest value is not more than 30 minutes away from the hour.

Where a matching hour could not be found it would be helpful if the query could include a time but no value.

I realise I'm asking a lot, but this would be incredibly helpful. Thanks for taking the time to read this. James

BTW, the table is just PK (autoincrement), timestamp, value, sensor id (FK). I've tried this to get the data out:

SELECT strftime('%S', time, 'unixepoch'),
       strftime('%M', time, 'unixepoch'),
       strftime('%H', time, 'unixepoch'),
       strftime('%d', time, 'unixepoch'),
       strftime('%m', time, 'unixepoch'),
       strftime('%Y', time, 'unixepoch'),
       value
FROM Timestream
WHERE idSensor = 359;
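A hedged sketch of the nearest-to-hour selection without window functions (SQLite integer division truncates, so (time + 1800) / 3600 * 3600 rounds a positive timestamp to the nearest hour, and the distance can never exceed the 30-minute limit):

SELECT m.hour_ts, t.value
FROM (
    SELECT (time + 1800) / 3600 * 3600 AS hour_ts,
           MIN(ABS(time - (time + 1800) / 3600 * 3600)) AS min_dist
    FROM Timestream
    WHERE idSensor = 359
    GROUP BY hour_ts
) AS m
JOIN Timestream AS t
  ON t.idSensor = 359
 AND (t.time + 1800) / 3600 * 3600 = m.hour_ts
 AND ABS(t.time - m.hour_ts) = m.min_dist;

Hours with no reading at all simply won't appear; emitting "a time but no value" for them would additionally need a generated list of hours to LEFT JOIN from.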

Breaking Semisynchronous Replication in MySQL 5.5

Posted: 30 Jul 2013 08:46 PM PDT

I've set up Semisynchronous Replication between two MySQL 5.5 servers running on Windows 7.

My application is running and updating the database on the master server, and the same updates are applied on the slave database server.

But sometimes, for unknown reasons, replication breaks.

On running the command:

SHOW STATUS LIKE 'Rpl_semi_sync%';  

It gives this status:

'Rpl_semi_sync_master_no_times', '0'
'Rpl_semi_sync_master_no_tx', '0'
'Rpl_semi_sync_master_status', 'ON'     <<-------------
'Rpl_semi_sync_master_timefunc_failures', '0'
'Rpl_semi_sync_master_tx_avg_wait_time', '338846'
'Rpl_semi_sync_master_tx_wait_time', '29479685'
'Rpl_semi_sync_master_tx_waits', '87'
'Rpl_semi_sync_master_wait_pos_backtraverse', '0'
'Rpl_semi_sync_master_wait_sessions', '0'
'Rpl_semi_sync_master_yes_tx', '3106'

Ideally, in semi-synchronous replication, when the sync breaks the status should change to OFF, since the master is no longer receiving acknowledgements from the slave. Please help us in this regard.
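Two server variables worth checking while diagnosing this (a hedged aside: by design, the master falls back to asynchronous replication once rpl_semi_sync_master_timeout milliseconds pass without a slave acknowledgement):

SHOW VARIABLES LIKE 'rpl_semi_sync_master_timeout';
SHOW VARIABLES LIKE 'rpl_semi_sync_master_wait_no_slave';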

Thought about this SQL Server backup plan?

Posted: 30 Jul 2013 01:01 PM PDT

I just started a new job, and I'm reviewing the database maintenance plan. I've got quite a bit of experience writing SQL, but not much experience with DB administration. My last job was at a large company, and they didn't let regular people touch that sort of stuff.

We are locked into SQL Server 2000 (it's embedded in some quite old software and we can't upgrade yet). The current maintenance plan (Full Recovery model) does the following:

Every hour from 6am - 11pm:

  1. backup log Accounting to Accounting_Logs with noinit

Every night at 1am, this happens:

  1. backup Log Accounting WITH TRUNCATE_ONLY
  2. DBCC SHRINKDATABASE (Accounting, TRUNCATEONLY)
  3. backup database Accounting_ReadOnly to Accounting with init

Then at 3am:

  1. all the indexes are rebuilt

Is this a decent plan? Will this give us good backups that are easy to recover? I know I'm asking for a lot, but any thoughts/comments/suggestions would be appreciated.

Please let me know if you need more information. Thanks!

Can I use a foreign key index as a shortcut to getting a row count in an INNODB table?

Posted: 30 Jul 2013 12:33 PM PDT

I have a table that has a large number of rows in it.

The primary key (an auto-incrementing integer) is, by default, indexed.

While waiting for a row count to be returned, I did an EXPLAIN in another window and the results were as follows:

mysql> SELECT COUNT(1) FROM `gauge_data`;
+----------+
| COUNT(1) |
+----------+
| 25453476 |
+----------+
1 row in set (2 min 36.20 sec)

mysql> EXPLAIN SELECT COUNT(1) FROM `gauge_data`;
+----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
| id | select_type | table      | type  | possible_keys | key             | key_len | ref  | rows     | Extra       |
+----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
|  1 | SIMPLE      | gauge_data | index | NULL          | gauge_data_FI_1 | 5       | NULL | 24596487 | Using index |
+----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
1 row in set (0.13 sec)

Since the primary key is guaranteed to be unique, can I just take the number of rows from the EXPLAIN and use that as the row count of the table?

BTW, I believe the difference in numbers is due to the fact that more data is continually being added to this table.
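A hedged aside: for InnoDB another quick estimate is available from the data dictionary; like the EXPLAIN row count, it is an approximation, not an exact count:

SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'gauge_data';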

Download SQL Server profiler for SQL Server Management Studio

Posted: 30 Jul 2013 01:02 PM PDT

How can I profile a SQL Server 2008 database to see code that's being executed on a particular database? I remember using the SQL Server profiler, but I don't see it in SQL Server Management Studio after downloading SQL Server 2008 R2 Express. Where can I download that tool and install it? Do I need the full version of SQL Server 2008 in order to see this option?

Slow queries on SQL Server [closed]

Posted: 30 Jul 2013 12:40 PM PDT

We have SQL Server 2005. Our main table is the archive table, which has nearly 200 million rows. There are 2000 clients that connect to a service, and the service writes their information to the archive. We also have another service which reads the clients' information from the archive in batches, calculates some additional information for each row, and writes it back in batches.

On the web side we have 100-200 users online at a time, and most of the queries depend on the archive table. I have built all the indexes I could on archive, and I'm using .NET Framework 3.5, connecting to the database with a standard connection string.

The problem is that when a user requests a one-day report, it takes 10-15 seconds to return 50 rows. One-month reports take more time, like 2-3 minutes for 5k-6k rows. I am not a DBA, but we don't have one, so I am expected to tackle this problem. Can you make any suggestions?

Thanks.

What is the easiest way to get started using databases with real data?

Posted: 30 Jul 2013 01:22 PM PDT

I have a project that could benefit from using a database, but I have no experience with databases, don't have access to a server, and have relatively little experience working with things living server-side.

If I'm going to have to tackle a learning curve, I'd prefer to learn something with broad applicability (such as SQL) but would settle for learning something like Access if it is sufficiently powerful for the task I'm currently trying to tackle. Of course, I'd also rather not drop $150 on Access if it can be helped since I'm just tinkering.

I've downloaded LibreOffice Base as well as something called SQLiteBrowser, but I wanted to check first before I invest time learning those particular applications and their flavors of SQL whether those tools will be sufficient for what I want to do.

I want to be able to:

  • import data from a CSV or from Excel
  • run queries that equate to "select x where this is that and this contains that and any of these contain that"
  • write(?) a new field which indicates those results which match a given query

Again, I'm willing to learn, but it would be nice not to have to learn a bunch of intermediate stuff about IT before I can focus on learning databases and, if necessary, the particulars of a given application.
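For scale, the three bullets above map to only a few lines in the sqlite3 shell (a hedged sketch; the file, table, and column names are invented):

-- in the sqlite3 command-line shell:
-- .mode csv
-- .import inventory.csv items
ALTER TABLE items ADD COLUMN matched INTEGER DEFAULT 0;

UPDATE items
SET matched = 1
WHERE status = 'active'
  AND notes LIKE '%refund%';

SELECT * FROM items WHERE matched = 1;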

How to retrieve the definition behind statistics added to tables

Posted: 30 Jul 2013 12:38 PM PDT

Is there a way to programmatically retrieve the definition of each statistics object added to table columns and indexes, for both user-created and system-created statistics? There are many statistics like '__WA_Sys_*' that are added by SQL Server.

I need to re-write some of them and add more, but there are too many to do manually with Management Studio.
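A hedged starting point for scripting that (standard catalog views, SQL Server 2005+): this lists every statistics object and its column list, which is usually enough to generate matching DROP STATISTICS / CREATE STATISTICS statements:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       s.auto_created,
       c.name                   AS column_name,
       sc.stats_column_id
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
  ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
JOIN sys.columns AS c
  ON c.object_id = sc.object_id AND c.column_id = sc.column_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY table_name, stats_name, sc.stats_column_id;

DBCC SHOW_STATISTICS (table, stats_name) then exposes the histogram behind each object.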

Get failed SQL Server agent job?

Posted: 30 Jul 2013 01:38 PM PDT

How do I get a list of jobs that failed last night? I have only found the following PowerShell script. What's the SQL equivalent?

dir $psPath\Jobs | % { $_.EnumHistory() } | ? { $_.RunStatus -ne 1 }  
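A hedged T-SQL equivalent against the msdb history tables (run_status 1 = succeeded; step_id 0 rows are the overall job outcome; run_date is an int in yyyymmdd form):

SELECT j.name, h.run_date, h.run_time, h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE h.step_id = 0
  AND h.run_status <> 1
  AND h.run_date >= CAST(CONVERT(char(8), DATEADD(DAY, -1, GETDATE()), 112) AS int)
ORDER BY h.run_date DESC, h.run_time DESC;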

Where to start learning to understand SQL Server architecture and internals?

Posted: 30 Jul 2013 01:01 PM PDT

I have a basic knowledge of T-SQL and the SQL Server components. My goal is to sharpen my skills and learn everything about SQL Server, to eventually become a DBA. I would like to understand SQL Server internals in depth: how exactly everything works, when, and why. Could you please suggest a good place to start? IMHO it's just not possible through programming work alone.

Upgrading Instances with Mirroring

Posted: 30 Jul 2013 12:24 PM PDT

If you want to upgrade or install a patch on two separate instances that house both the principal and mirrored database, how can you go about that? If the database that is being mirrored needs to be available 24/7 and you don't have a window to go offline, what is the best means to do this?

EDIT: this is with SQL Server.

Are heaps considered an index structure or are they strictly a table structure without index?

Posted: 30 Jul 2013 12:32 PM PDT

Inspired by this post: https://twitter.com/#!/SQLChicken/status/102930436795285505

Heaps: Are they considered an index structure or are they strictly a table structure without index?

The smallest backup possible ... with SQL Server

Posted: 30 Jul 2013 12:35 PM PDT

Daily we ship our SQL Server backups across the WAN. We need to minimize the size of these backups so it does not take forever.

We don't mind if our backup process takes a bit longer; as it stands we need to move 30 GB of compressed backup across the WAN, which takes over 10 hours.

There are 2 options we have to get smaller daily backups.

  1. Log shipping, which would mean we would have to restructure DR process.
  2. Strip information out of the db and rebuild on the other side (drop non clustered indexes, pack clustered indexes at 100% - rebuild on the other side)

Both would involve a fair amount of work from our part. We are using SQL Server 2008 pro, all backups are compressed.

Are there any commercial products that can give us similar backup size to option (2)?

Is there a comprehensive script out there that will allow us to accomplish (2)? (handling indexed views, filtered indexes, foreign keys and so on)

Location of Maintenance Plan's Back Up Database Tasks information (SQL Server 2005)

Posted: 30 Jul 2013 05:52 PM PDT

I would like to know where, in the database or on the file system, the information about the Back Up Database Task in a Maintenance Plan is stored.

I can find the job in msdb.dbo.sysjobs, and the subplan in msdb.dbo.sysmaintplan_subplans.

But I need to find where and how the Task is being stored.
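A hedged pointer: in SQL Server 2005 a maintenance plan is saved as an SSIS package inside msdb, so the Back Up Database Task settings live in the package XML rather than in a relational table. Something like the following may expose it (the double cast is an assumption and can trip over encoding; the plan name is hypothetical):

SELECT name, id,
       CAST(CAST(packagedata AS varbinary(max)) AS xml) AS package_xml
FROM msdb.dbo.sysdtspackages90
WHERE name = N'MyMaintenancePlan';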

Any help will be greatly appreciated.

[SQL Server] Need assistance doing a PIVOT-like transformation on VARCHAR data.


Need assistance doing a PIVOT-like transformation on VARCHAR data.

Posted: 30 Jul 2013 02:40 PM PDT

I am trying to write a query using the table below - this table comes from a vendor-supplied system, so I can't modify it:

Item_ID, Tag
ITEM1, Blue
ITEM1, Warm
ITEM2, Green
ITEM3, Coarse
ITEM2, Fine

There is a maximum of four Tag records for one Item. I want to get the result set below:

Item_ID, TAG1, TAG2, TAG3, TAG4
ITEM1, Blue, Warm, NULL, NULL
ITEM2, Green, Fine, NULL, NULL
ITEM3, Coarse, NULL, NULL, NULL

I have done this previously by creating a temp table with an ID column and the structure of the second table, inserting the distinct Item_IDs, then using a WHILE loop to iterate through the first table, updating the rows in the second where the second table's Item_ID matches the first's but there isn't yet a Tag field holding the value for that Item_ID. The problem with this solution is that it means looping through the first table and, inside that loop, looping through the second and updating where needed, which is very resource intensive. I've looked at the PIVOT command, but I can't find any samples with varchar values (the samples I've seen all have some sort of aggregation/count which I can't see how to adapt). Does anyone know a more efficient way of doing the above transformation?
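One common alternative to both the WHILE loop and PIVOT is conditional aggregation, which handles varchar values naturally; a hedged sketch (the source table name is invented, and the ORDER BY inside ROW_NUMBER decides which tag lands in TAG1 - alphabetical order is used here for lack of a natural ordering column):

WITH numbered AS (
    SELECT Item_ID, Tag,
           ROW_NUMBER() OVER (PARTITION BY Item_ID ORDER BY Tag) AS rn
    FROM dbo.ItemTags
)
SELECT Item_ID,
       MAX(CASE WHEN rn = 1 THEN Tag END) AS TAG1,
       MAX(CASE WHEN rn = 2 THEN Tag END) AS TAG2,
       MAX(CASE WHEN rn = 3 THEN Tag END) AS TAG3,
       MAX(CASE WHEN rn = 4 THEN Tag END) AS TAG4
FROM numbered
GROUP BY Item_ID;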

SQL Server 2005 and SQL Server 2012

Posted: 30 Jul 2013 09:56 AM PDT

We are about to start a project to upgrade to IPv6. I need advice on how SQL Server 2005 and 2012 are affected, and whether there is a difference between them. What needs to be tested and/or changed? I know the question is vague, but any help will be appreciated. Thanks in advance.

SQL Server 2005 system databases

Posted: 30 Jul 2013 09:50 AM PDT

SQL Server 2005: I was wondering if I can point the system databases to another set of system database files. For example, at startup they point to a set on D:\ and I want them to point to a set on F:\. Is this possible, and what steps should I take? Once they point to F:\, I can delete the files on D:\. If any other info is needed, let me know.
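A hedged sketch of the documented approach for msdb/model/tempdb (master is moved via the -d and -l startup parameters instead; the paths are invented, and the logical names below are the defaults):

ALTER DATABASE msdb
    MODIFY FILE (NAME = MSDBData, FILENAME = N'F:\SQLData\MSDBData.mdf');
ALTER DATABASE msdb
    MODIFY FILE (NAME = MSDBLog,  FILENAME = N'F:\SQLData\MSDBLog.ldf');
-- stop SQL Server, copy the files from D:\ to F:\, restart, verify,
-- and only then delete the old copies on D:\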

need help with applying function within Query

Posted: 30 Jul 2013 12:48 AM PDT

Hi all, it's my first post here! Glad to be here. Hopefully someone can help me with the following. If you look at the attached screenshot, you will see the 3 SQL tables I am working with, along with the query I need to run on these tables. The problem is that in the "CURRENTS" table, the first column is in "ticks" format instead of DATETIME, so in my query I need to convert the values from that column (Timestamp_ID) to DATETIME format. Looking at the SQL database, I saw 2 functions (see attached notepads) that may have been created to do exactly that: converting that column into DATETIME. The problem is that I don't know how I would use them in my query. Can anyone help? Thanks in advance.
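If the column really holds .NET-style ticks (100-nanosecond intervals since 0001-01-01), a hedged inline conversion looks like the sketch below: 599266080000000000 is the tick count at 1900-01-01, and the day/second split avoids overflowing DATEADD's int argument. Verify against rows whose timestamps you know before trusting it, since the vendor's two functions may encode something slightly different:

DECLARE @ticks bigint = 634995000000000000;   -- example value from the column
DECLARE @base  bigint = 599266080000000000;   -- ticks at 1900-01-01

SELECT DATEADD(SECOND,
               ((@ticks - @base) / 10000000) % 86400,    -- leftover seconds
               DATEADD(DAY,
                       (@ticks - @base) / 864000000000,  -- whole days
                       '19000101'));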

[Articles] Flying high on the Big Data hot-air


Flying high on the Big Data hot-air

Posted: 29 Jul 2013 11:00 PM PDT

Today Phil Factor talks about Big Data, and all the hype that's in the news.


[SQL 2012] New Guy


New Guy

Posted: 29 Jul 2013 01:31 PM PDT

Hi everyone, I am new to the forums and to SQL, so I am saying hello! I'm looking to develop my abilities with SQL due to personal interest and my interest in pursuing a new career path. I've gone through some guides and examples on SQLzoo.net and a few other sites. My next step will be reading some books (70-461: Querying Microsoft SQL Server 2012 by Itzik Ben-Gan, and Microsoft SQL Server 2012 T-SQL Fundamentals by Itzik Ben-Gan), installing SQL Server 2012 (trial), working through some examples, and most importantly meeting you guys, the experts. Any advice is appreciated. Cheers! -Bob

LINKED SERVER

Posted: 30 Jul 2013 01:44 AM PDT

Hi, I have a 64-bit Windows 7 Ultimate machine with SQL Server 2008 R2 (64-bit). I am trying to pull data from a Sybase database by using a linked server in SQL Server, so I have created an ODBC DSN for the Sybase connection. The DSN works perfectly when connecting from ODBC Data Sources (64-bit). I'm trying to create the linked server in SQL using the wizard, but I am getting the following error message: "[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified". I tried with a 32-bit ODBC DSN also, but no use. Has anyone experienced and solved this?

The definition of object has changed since it was compiled

Posted: 29 Jul 2013 06:39 PM PDT

Hi all, I have been facing a weird problem for a couple of days. When we try to execute any SP (randomly), it sometimes throws the error "The definition of object has changed since it was compiled", and if I recompile it or alter the procedure it works fine. My doubts:

  1. First of all, why does this error occur and how can I tackle it?
  2. Is there any automatic process I can set up so this problem never occurs?
  3. I read somewhere in the forum that running sp_recompile on the whole database will solve it, but is running it every time the right approach, as this will clear the execution plan of that object?

Thanks, Bhaskar Shetty

SQL server cluster Failover installation

Posted: 30 Jul 2013 01:16 AM PDT

Hi guys, I am supposed to do a failover installation on Windows Server, and I have to create a cluster group. Does anyone know any site, video, or notes which would guide me in getting prepared? Rookie here :hehe:

Can many or a group of Transaction Logs be automatically restored?

Posted: 29 Jul 2013 06:42 AM PDT

We are running SQL Server 2012 SP1 64-bit EE on Windows Server 2008 R2 SP1. We are taking a full backup on Sunday night, a diff backup Monday through Saturday night, and transaction log backups Monday through Friday every 10 minutes from 7 A.M. to 6 P.M. If we have to recover the database as of 5 P.M. (Monday through Friday), then we would have to manually run a restore of the tlogs for every 10 minutes up until 5 P.M. That would be about 60 restore commands to restore all tlog backups between 7 A.M. and 5 P.M.

With SQL Server, is there a way to automatically restore multiple tlog backups as a group? (Oracle automatically recovers a large number of archive logs with RECOVER DATABASE USING BACKUP CONTROL FILE UNTIL CANCEL;.)

Thanks in advance, Kevin
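There is no single native RESTORE command for a whole chain, but the restores can be generated instead of typed; a hedged sketch that scripts them from the backup history (the database name and cut-off time are hypothetical, and the final WITH RECOVERY step is left to you):

SELECT 'RESTORE LOG [MyDB] FROM DISK = N''' + bmf.physical_device_name
       + ''' WITH NORECOVERY;'
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = 'MyDB'
  AND bs.type = 'L'
  AND bs.backup_finish_date > '2013-07-29 19:00'  -- after the restored diff
ORDER BY bs.backup_finish_date;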

Table Partition

Posted: 29 Jul 2013 09:45 PM PDT

Hi all, I have partitioned the VOICE_CALL table, size = 130 GB (split into 10 ndf files). But before partitioning the mdf file size was 350 GB, and after partitioning the mdf file size is 351 GB. The 10 ndf files total 99 GB, so the total database size is 449 GB. Why did the mdf file size not decrease, even though I moved the data to the ndf files?

Thanks, Ramana
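A hedged aside: moving objects to other filegroups leaves the vacated pages as free space inside the mdf; the file itself only gets smaller if that space is explicitly released, e.g.:

DBCC SHRINKFILE (1, TRUNCATEONLY);  -- file_id 1 is the primary data file

TRUNCATEONLY only releases free space at the end of the file; see the SHRINKFILE caveats discussed earlier on this page before going further.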

RSAT Installation Issue.

Posted: 29 Jul 2013 11:10 PM PDT

Hello, I am trying to install Remote Server Administration Tools (RSAT) in order to then install Hyper-V, then Windows, and then finally some SQL 2012 instances. The issue, though, is right at the start with RSAT: it just won't load on Windows 7 with SP1. This appears to have been an issue for ages, and I have tried so many ways to get it done, including all the CMD options. Has anyone here got any experience of this? I can't uninstall SP1 to load RSAT because SP1 is built into my build. I'm sure this may be off topic to some, but I figure there must be some other DBAs that have encountered this issue. Any help is greatly appreciated, I'm going nuts! Thanks, D.

Using non-deterministic T-SQL functions inside a UDF

Posted: 29 Jul 2013 02:47 PM PDT

Is it even possible to use a non-deterministic function like RAND() inside a UDF? If so, how? I'm trying to write a function to generate semi-random numbers, and it falls flat on its face...

CREATE FUNCTION dbo.GetRandomNumber (@Lower int, @Upper int)
RETURNS int
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @Random int
    SELECT @Random = ROUND(((@Upper - @Lower - 1) * RAND() + @Lower), 0)
    RETURN (@Random)
END

I get this:

Msg 443, Level 16, State 1, Procedure GetRandomNumber, Line 7
Invalid use of a side-effecting operator 'rand' within a function.

If I'm reading it right (no guarantees, mind you!), I can't do this. Inside a stored procedure I can generate the numbers just fine... so I guess I could do that if I needed to. So is there any way to get SQL Server to return a value from a non-deterministic T-SQL function inside a function I created? Thanks
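A commonly cited workaround (a hedged sketch, not an official pattern): a UDF may not call RAND() directly, but it may select from a view that does, so the side-effecting call gets wrapped in a view:

CREATE VIEW dbo.vw_RandValue
AS
SELECT RAND() AS RandValue;
GO
CREATE FUNCTION dbo.GetRandomNumber (@Lower int, @Upper int)
RETURNS int
AS
BEGIN
    DECLARE @Random int;
    SELECT @Random = ROUND(((@Upper - @Lower - 1) * RandValue + @Lower), 0)
    FROM dbo.vw_RandValue;
    RETURN (@Random);
END

WITH SCHEMABINDING is dropped here; the same trick is usually shown with NEWID() when per-row randomness is needed, since RAND() is evaluated once per query.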

Inserted xml data in to Sql server records

Posted: 29 Jul 2013 05:36 PM PDT

Please, can anyone help me insert XML file data into SQL Server?

Example:

<ROOT>
  <Customers>
    <Customer CustomerID="C001" CustomerName="Arshad Ali">
      <Orders>
        <Order OrderID="10248" OrderDate="2012-07-04T00:00:00">
          <OrderDetail ProductID="10" Quantity="5" />
          <OrderDetail ProductID="11" Quantity="12" />
          <OrderDetail ProductID="42" Quantity="10" />
        </Order>
      </Orders>
      <Address> Address line 1, 2, 3</Address>
    </Customer>
    <Customer CustomerID="C002" CustomerName="Paul Henriot">
      <Orders>
        <Order OrderID="10245" OrderDate="2011-07-04T00:00:00">
          <OrderDetail ProductID="11" Quantity="12" />
          <OrderDetail ProductID="42" Quantity="10" />
        </Order>
      </Orders>
      <Address> Address line 5, 6, 7</Address>
    </Customer>
    <Customer CustomerID="C003" CustomerName="Carlos Gonzlez">
      <Orders>
        <Order OrderID="10283" OrderDate="2012-08-16T00:00:00">
          <OrderDetail ProductID="72" Quantity="3" />
        </Order>
      </Orders>
      <Address> Address line 1, 4, 5</Address>
    </Customer>
  </Customers>
</ROOT>

Please tell me how we can insert this data into SQL records. Raghu Kotam, Sr. SQL Developer
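A hedged sketch of one standard route, shredding the document with the xml type's nodes()/value() methods (the variable would typically be loaded from the file, e.g. via OPENROWSET(BULK ..., SINGLE_BLOB); the INSERT target is up to you):

DECLARE @x xml = N'<ROOT>...</ROOT>';   -- the document above

SELECT c.value('@CustomerID',   'varchar(10)') AS CustomerID,
       c.value('@CustomerName', 'varchar(50)') AS CustomerName,
       o.value('@OrderID',      'int')         AS OrderID,
       o.value('@OrderDate',    'datetime')    AS OrderDate,
       od.value('@ProductID',   'int')         AS ProductID,
       od.value('@Quantity',    'int')         AS Quantity
FROM @x.nodes('/ROOT/Customers/Customer') AS t1(c)
CROSS APPLY c.nodes('Orders/Order')       AS t2(o)
CROSS APPLY o.nodes('OrderDetail')        AS t3(od);

Prefixing the SELECT with INSERT INTO your_table (...) turns it into the actual load.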

newbie

Posted: 29 Jul 2013 07:41 AM PDT

Hi guys, I am a newbie. Will you guys be my friend?

The view contains a convert that is imprecise or non-deterministic error when trying to create a clustered index on a view

Posted: 29 Jul 2013 07:17 AM PDT

So I am attempting to create a clustered index on a view, using the following fields: order_date (datetime), offer_code (varchar(3)), and demandtype (varchar(1)). When I try to create the clustered index I receive the following error:

Cannot create index or statistics 'IX_vw_sale_demand' on view 'vw_sale_demand' because key column 'INDATE' is imprecise, computed and not persisted. Consider removing reference to column in view index or statistics key or changing column to be precise. If column is computed in base table consider marking it PERSISTED there.

Now, order_date is stored as a decimal(10,0) in the base table and I am converting it to a datetime in my view. From what I have read, the error happens mostly when using float values. Any ideas on how to fix this? Thanks for the help in advance - I will continue to search for an answer and will post if I find one...
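A hedged idea to try, assuming the decimal encodes yyyymmdd (verify the encoding first; if it is something else, the same principle applies with different string surgery): route the conversion through an explicit style number, which keeps the expression deterministic for indexing purposes, whereas datetime conversions without a style often do not qualify.

-- in the view definition:
SELECT CONVERT(datetime, CONVERT(char(8), order_date), 112) AS order_date
-- style 112 = yyyymmdd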

NETWORK SERVICE cant read system view

Posted: 16 Jul 2013 07:08 AM PDT

I'm trying to run the following query from a web service:

SELECT last_user_update FROM sys.dm_db_index_usage_stats

I get an error when I try this, saying that the current user does not have permissions. Here's what I know:

  • The web service runs as NT AUTHORITY\NETWORK SERVICE.
  • NT AUTHORITY\NETWORK SERVICE has the "public" role on the database.
  • The view sys.dm_db_index_usage_stats has two SELECT permission options, one with a blank grantor and one with "dbo" as the grantor. "public" is given access to the one with dbo as the grantor. I tried to check the other SELECT box, but SQL quietly unchecks it when I close the window, so I'm basically unable to change the permissions on this view.

Is there a way that I can grant access to sys.dm_db_index_usage_stats for NT AUTHORITY\NETWORK SERVICE? Or is there another way I can discover the last access time on a table that does not require access to sys.dm_db_index_usage_stats?
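A hedged aside: the server-scoped dynamic management views are gated by the server-level VIEW SERVER STATE permission rather than per-view SELECT grants, which would explain the checkbox that won't stick. The usual grant is:

USE master;
GRANT VIEW SERVER STATE TO [NT AUTHORITY\NETWORK SERVICE];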

Configuring Database Mail with a SQL Server Agent Operator with multiple email addresses

Posted: 29 Jul 2013 07:26 AM PDT

We are running SQL Server 2012 SP1 64-bit EE on Windows Server 2008 R2 SP1. (To create an Operator group, go to SQL Server Agent, Operators, DBA Group Properties and specify 'DBA Group' beside the 'Name:' box and the email addresses in the 'E-mail Name:' box.) Use the code below to associate the Operator with email addresses. The first piece of code updates the Operator with an Exchange distribution email address (sqldba@school.net). The distribution address points to 3 email addresses. This works fine.

USE [msdb]
GO
EXEC msdb.dbo.sp_update_operator @name = N'DBAGroup1',
    @enabled = 1,
    @pager_days = 0,
    @email_address = N'sqldba@school.net',
    @pager_address = N'',
    @netsend_address = N''
GO

The second piece of code updates the Operator with 3 individual email addresses. With the Operator set up this way, it does not work; the email is not sent. The error message is listed below. (I have also tried the email addresses with a comma in between.)

USE [msdb]
GO
EXEC msdb.dbo.sp_update_operator @name = N'DBAGroup2',
    @enabled = 1,
    @pager_days = 0,
    @email_address = N'TestEmail1@school.net;TestEmail2@school.net;TestEmail3@school.net',
    @pager_address = N'',
    @netsend_address = N''
GO

--NOTE:
--The below code does NOT WORK when sending to a SQL Server Agent Operator's
--group (i.e. DBAGroup2) with more than one email address.
--Send_Mail_via_DBMail.sql
EXEC msdb.dbo.sp_send_dbmail
    @recipients = 'DBAGroup2', --(Does Not Work)
    @body = 'Test Email Body',
    @subject = 'Test Database Email',
    @profile_name = 'TestDB Administrator Profile',
    @file_attachments = 'D:\DBA Scripts\AlterDatabases\Alter_Database_Options.sql'

The error message, located in the Database Mail log, is:

"The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 1 (2013-07-29T16:09:15). Exception Message: Cannot send mails to mail server. (A recipient must be specified.)."

I thought I had this working via the GUI (SQL Server Agent, double-click a job, click the Notifications page) with an older version of SQL Server (2005, I believe). I had jobs set up that would send emails to an Operator group which contained multiple individual Microsoft Exchange email addresses versus an Exchange distribution group. Any thoughts? With SQL Server 2012 must we now use an Exchange distribution email address?

Thanks in advance, Kevin

Missing OraOLEDB provider

Posted: 29 Jul 2013 06:05 AM PDT

I need to be able to set up linked servers on our SQL Server 2012 server. When I try to set up a new linked server, I do not see the Oracle provider. I installed the 64-bit Oracle client... no go. I then installed the 32-bit Oracle client in addition to the 64-bit... no go. I tried a regedit hack I found on Google... no go. I rebooted the server after each of the above attempts. Has anyone found a solution to this issue? Thank you in advance. Charlie

Error Log File Viewer

Posted: 29 Jul 2013 05:01 AM PDT

I've just configured a SQL Server 2012 instance, and when I go to view the SQL Server error log through the Log File Viewer, the interface says "No log source". Looking at the startup parameters I can see the log file location, and I can see there are log files; the file management system also lets me look at them directly. Why doesn't the Log File Viewer work?

[T-SQL] Salesman Running Totals by Date problem...


Salesman Running Totals by Date problem...

Posted: 29 Jul 2013 09:19 PM PDT

Hi all,

SQL Server 2008. I have the following problem I need to try and solve... I have a list of salesmen and I need to return running totals (grouped by salesman) for each, from a start date until an end date, with all dates in the date range returned.

The current structure, simplified for this example (entities are as-is, but I've removed a lot of the attributes I don't require here):

Sales_Man
    SalesmanID int
    Name varchar(50)

Order
    OrderID int
    SalemanID int
    OrderDate DateTime

Order_Line
    OrderLineID int
    OrderID int
    OrderLineValue money

I figured I need a calendar table for this, so I can return rows for each salesman regardless of whether they made any sales:

Tally_Date
    DateFull Datetime

Where I am so far...

SELECT A.[DateFull],
       COALESCE(t.[SalesmanName], '') AS [SalesmanName],
       COALESCE(t.[TotalSales], 0.00) AS [TotalSales]
FROM [Tally_Date] A
OUTER APPLY (
    SELECT ISNULL(SUM(A.[OrderLineValue]), 0.00) AS [TotalSales],
           ISNULL(C.[SalesmanName], '') AS [SalesmanName]
    FROM [Order_Line] A
    LEFT OUTER JOIN [Order] B ON A.[OrderID] = B.[OrderID]
    LEFT OUTER JOIN [SalesMan] C ON B.[SalesmanID] = C.[SalesmanID]
    WHERE A.[OrderDate] <= [DateFull]
    GROUP BY ISNULL(C.[SalesmanName], '')
) AS t
WHERE A.[DateFull] BETWEEN @StartDate AND @EndDate
ORDER BY [DateFull] ASC, [SalesmanName]

This returns all of the sales force with running totals, but salesmen only appear in the result set once they have at least one sale. If I completely remove the SalesmanName stuff, I get a full set of dates with a running total for all salesmen correctly, but as soon as I try to group by salesman it all goes wrong... Once I get this working, I'd also like to try returning weekly and monthly totals in a separate query instead of by individual date... Can someone help, as I've been banging my head against a wall on this for hours?

Many thanks
Charlotte
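A hedged sketch of a different shape that usually cures the "salesman appears only after the first sale" symptom: CROSS JOIN the calendar to the salesmen first, so every (date, salesman) pair exists, then LEFT JOIN the sales onto the pairs (names follow the schema above, including the SalemanID spelling as given):

SELECT d.DateFull,
       s.Name AS SalesmanName,
       COALESCE(SUM(ol.OrderLineValue), 0.00) AS TotalSales
FROM Tally_Date AS d
CROSS JOIN Sales_Man AS s
LEFT JOIN [Order] AS o
       ON o.SalemanID = s.SalesmanID
      AND o.OrderDate <= d.DateFull   -- running total: everything up to the date
LEFT JOIN Order_Line AS ol
       ON ol.OrderID = o.OrderID
WHERE d.DateFull BETWEEN @StartDate AND @EndDate
GROUP BY d.DateFull, s.Name
ORDER BY d.DateFull, s.Name;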

Email Step Logs Concatenated

Posted: 30 Jul 2013 12:35 AM PDT

Hi, I am fairly new to SQL Server, but I wanted to ask a question about step logs and email. I have 1 job with 5 steps, because I was tired of getting 5 emails along with the other 150 I get a day, so I condensed it into one. Each step backs up a database. I get emailed when the last step completes successfully, or if the job fails, and there is really no detail in that.

My question is: is there any way to take the log from each of the 5 steps, concatenate it, and send it by email? It would save a lot of time digging through the tables of logs. I didn't know if this was even possible, but I wanted to ask.

This is the code I am using to back up a single database as a step, just repeated in each step:

DECLARE @Path NVARCHAR(1000), @FileName NVARCHAR(255), @FullPath NVARCHAR(1255),
        @dateString CHAR(8), @dayStr CHAR(2), @monthStr CHAR(2)

--month variable
IF (SELECT LEN(CAST(MONTH(GETDATE()) AS CHAR(2)))) = 2
    SET @monthSTR = CAST(MONTH(GETDATE()) AS CHAR(2))
ELSE
    SET @monthSTR = '0' + CAST(MONTH(GETDATE()) AS CHAR(2))

--day variable
IF (SELECT LEN(CAST(DAY(GETDATE()) AS CHAR(2)))) = 2
    SET @daySTR = CAST(DAY(GETDATE()) AS CHAR(2))
ELSE
    SET @daySTR = '0' + CAST(DAY(GETDATE()) AS CHAR(2))

--Assemble date format
SET @dateString = CAST(YEAR(GETDATE()) AS CHAR(4)) + @monthStr + @dayStr

--Set path for storage
SET @Path = 'E:\Database_Backups\AMRMVP\'

--Set database name
SET @FileName = 'AMRMVP_' + @dateString + '.bak'
SET @FullPath = @Path + @FileName

--Start backup of the database
BACKUP DATABASE AMRMVP
TO DISK = @FullPath
WITH INIT

Any help is greatly appreciated.

Thanks,
Mark
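On the actual question, a hedged sketch: a final job step can pull the earlier steps' log text out of msdb and mail it in one message using sp_send_dbmail's @query option (the profile, recipient, and job name are hypothetical):

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MyMailProfile',
    @recipients   = 'dba@example.com',
    @subject      = 'Backup job - step messages',
    @query        = N'SELECT h.step_id, h.step_name, h.run_status, h.message
                      FROM msdb.dbo.sysjobhistory AS h
                      JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
                      WHERE j.name = ''MyBackupJob''
                        AND h.step_id > 0
                        AND h.run_date = CAST(CONVERT(char(8), GETDATE(), 112) AS int)';

As an aside, the date-string assembly in the step above can be collapsed to CONVERT(char(8), GETDATE(), 112), which returns yyyymmdd directly.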

Create sum closest to an integer

Posted: 30 Jul 2013 12:22 AM PDT

Hi, I need to create a function that will check an int column from a table and generate the best combinations to create a sum closest to a given integer. For example: the integer given is 10, and the int column has the following records: 5, 6, 3, 4, 3. The function should return 5,4 and 6,3. Thank you for any help!

Find the first record for each month

Posted: 29 Jul 2013 01:31 AM PDT

I'm trying to pull out the file size and backup size for each database using the msdb backup history tables. I want to get the first record for each month (due to the vagaries of the backup system, there may not always be a backup on the 1st - or there may be more than one!).

This is what I have so far. (I think I want to use PARTITION BY, to get rank #1 as the first record and go from there, but I am unsure how to express that I want to partition by month???) Any suggestions appreciated!!! (Especially if I'm on completely the wrong track :hehe:)

SELECT backup_start_date AS [Date],
       ((backupfile.backup_size / 1024) / 1024) AS [Database Size],
       ((backupfile.file_size / 1024) / 1024) AS [File Size],
       RANK() OVER (PARTITION BY backup_start_date ORDER BY backup_start_date) AS Rank
FROM msdb..backupfile, msdb..backupset
WHERE backupset.database_name = 'mydatabasename'
  AND file_type = 'D'
  AND backupfile.backup_size > 0
  AND backupfile.backup_set_id = backupset.backup_set_id
ORDER BY backup_start_date
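A hedged rework of that idea: partition by calendar month (year plus month), order by date within each partition, and keep rank 1; ROW_NUMBER avoids ties if two backups share a start date:

SELECT [Date], [Database Size], [File Size]
FROM (
    SELECT bs.backup_start_date AS [Date],
           bf.backup_size / 1024 / 1024 AS [Database Size],
           bf.file_size / 1024 / 1024   AS [File Size],
           ROW_NUMBER() OVER (
               PARTITION BY YEAR(bs.backup_start_date), MONTH(bs.backup_start_date)
               ORDER BY bs.backup_start_date
           ) AS rn
    FROM msdb..backupfile AS bf
    JOIN msdb..backupset AS bs
      ON bf.backup_set_id = bs.backup_set_id
    WHERE bs.database_name = 'mydatabasename'
      AND bf.file_type = 'D'
      AND bf.backup_size > 0
) AS x
WHERE rn = 1
ORDER BY [Date];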

trigger to avoid disable or delete a job

Posted: 29 Jul 2013 06:53 PM PDT

Hi friends, let us assume that I have granted the SQLAgentOperatorRole to the XXX login. That login then has full permission to delete a job, create a job, disable a job, and enable a job. Is there any trigger which prevents disabling a job, deleting a job, or enabling a job?

Regards, Sundar S
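There is no official hook for this, but since Agent jobs live in ordinary msdb tables, a DML trigger is sometimes used; a hedged, unsupported sketch (sp_update_job's enable/disable is an UPDATE of sysjobs, and job deletion is a DELETE):

USE msdb;
GO
CREATE TRIGGER dbo.trg_protect_agent_jobs
ON dbo.sysjobs
FOR UPDATE, DELETE
AS
BEGIN
    RAISERROR('Deleting, disabling, or enabling Agent jobs is blocked.', 16, 1);
    ROLLBACK TRANSACTION;
END

Note this also blocks legitimate edits; the trigger would have to be dropped before maintaining jobs.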

Find the last 6 Tuesdays or Wed or Whatever day.

Posted: 29 Jul 2013 04:55 AM PDT

I have a sales report that shows the previous day's sales. I need to modify it into a rolling report that shows the previous 6 of that day of the week - so Monday shows the previous 6 Mondays, Tuesday the previous 6 Tuesdays. I suppose I could put it all in a static table and every seventh week delete the oldest week, but there should be something easier than keeping all that data sitting in a table. Any thoughts or ideas on how to find the dates for the previous instances of a given day of the week?
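A hedged sketch of generating those dates on the fly (SQL Server 2008 syntax; anchor to whatever "report day" means in your context):

SELECT DATEADD(DAY, -7 * w.n, CAST(GETDATE() AS date)) AS report_day
FROM (VALUES (1), (2), (3), (4), (5), (6)) AS w(n)
ORDER BY report_day;

These six dates can then drive the join or WHERE clause against the sales table.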
