Tuesday, July 30, 2013

[how to] PostgreSQL kill - Sighup pid


PostgreSQL kill - Sighup pid

Posted: 30 Jul 2013 08:41 PM PDT

" To reload the configuration files, we send the SIGHUP signal to the postmaster, which then passes that on to all connected backends. That's why some people call reloading the server "sigh-up-ing".

As reloading the configuration file is achieved by sending the SIGHUP signal, we can reload the configuration file just for a single backend using the kill command.

First, find out the pid of the backend using pg_stat_activity. Then, from the OS prompt, issue the following:

kill -SIGHUP pid "
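For reference, a minimal sketch of that lookup (on PostgreSQL 9.2+ the column is pid; on 9.1 and earlier it is procpid, and the database filter below is only illustrative):

SELECT pid, usename, datname, application_name
FROM pg_stat_activity
WHERE datname = 'mydb';          -- hypothetical database name

-- then, from the OS shell, signal just that backend:
-- kill -SIGHUP 12345            -- hypothetical pid returned above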

I don't understand the words in bold. We have many backend pids, and if we send the signal to just one "pid", how does it get the changes from reloading the configuration file (postgresql.conf)?

Many thanks.

Need advice for table design for multi-user access

Posted: 30 Jul 2013 03:33 PM PDT

I have an inventory application that needs to support multi-user. As of right now, only one user can access and manage their items. I've gotten a lot of requests to support multiple users so they can all manage the same inventory.

I have a table called user_items that stores item information. To keep it simple, I'll include just the relevant column names.

mysql> select primary_item_id, user_id, item_name from users_item limit 1;
+-----------------+---------+-----------------+
| primary_item_id | user_id | item_name       |
+-----------------+---------+-----------------+
|             100 |       4 | Stereo Receiver |
|             101 |       5 | Couch           |
+-----------------+---------+-----------------+

I've created a mapping table to map the items to users.

+-------------+-------------+----------------+------------+-----------+
| map_user_id | map_item_id | unique_item_id | item_owner | privilege |
+-------------+-------------+----------------+------------+-----------+
|           4 |         100 |              1 |          1 |      NULL |
|          13 |         100 |              1 |          1 |      NULL |
|           5 |         101 |              1 |          5 |      NULL |
+-------------+-------------+----------------+------------+-----------+

The unique_item_id column is the item_id that's displayed to the users. So item #1 for user #4 is "Stereo Receiver." Item #1 for user #5 would be a couch. The item_owner field doesn't mean much for the time being. I'm not sure if I need it but it's there for now as I play with the schema and code.
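For reference, here is roughly how I read those two tables as DDL (the column types and the mapping table's name are my assumptions, not the real schema):

CREATE TABLE users_item (
    primary_item_id INT          NOT NULL PRIMARY KEY,
    user_id         INT          NOT NULL,
    item_name       VARCHAR(255) NOT NULL
);

CREATE TABLE user_item_map (        -- hypothetical name for the mapping table
    map_user_id     INT NOT NULL,
    map_item_id     INT NOT NULL,   -- references users_item.primary_item_id
    unique_item_id  INT NOT NULL,   -- per-user item number shown to the user
    item_owner      INT NULL,
    privilege       INT NULL,
    PRIMARY KEY (map_user_id, map_item_id)
);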

Anyway, this works fine, except that I need multiple users to be able to track the same item(s). Instead of letting users share some items AND track their own items, my version of "multi-user" forces them all to track exactly the same set of items. If user #13 adds a new item, user #4 also gets access to said item.

Any suggestions? I think I shot myself in the foot by offering unique ids for each item. It is what it is so now I have to work with what I have.

ORDER BY items must appear in the select list [...]

Posted: 30 Jul 2013 02:28 PM PDT

Using Microsoft SQL Server 2008, I get the following error.

Msg 104, Level 16, State 1, Line 43
ORDER BY items must appear in the select list if the statement contains a UNION, INTERSECT or EXCEPT operator.

The query I am using is kind of complex, but the CASE expression inside the ORDER BY clause cannot see the aliased column name. Here is a brief example.

SELECT 1 AS foo, 2 AS bar
UNION ALL
SELECT 10 AS foo, 20 AS bar
ORDER BY CASE WHEN foo = 2 THEN 1 END;

In my production query the left-query needs to be ordered by the column [360_set] found in the table, and the right-query needs to be ordered as if [360_set] was null.

How do I fix this error, and why does this syntax generate an error?

Here is the version info,

Microsoft SQL Server Management Studio     10.0.5512.0
Microsoft Analysis Services Client Tools   10.0.5500.0
Microsoft Data Access Components (MDAC)    6.1.7601.17514
Microsoft MSXML                            3.0 6.0
Microsoft Internet Explorer                9.10.9200.16635
Microsoft .NET Framework                   2.0.50727.5472
Operating System                           6.1.7601
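One common workaround, shown here only as a minimal sketch using the brief example above, is to apply the ORDER BY to a derived table that wraps the UNION, so the column aliases are in scope:

SELECT u.foo, u.bar
FROM (
    SELECT 1 AS foo, 2 AS bar
    UNION ALL
    SELECT 10 AS foo, 20 AS bar
) AS u
ORDER BY CASE WHEN u.foo = 2 THEN 1 END;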

is it possible to pass the message from sp_add_alert to the job?

Posted: 30 Jul 2013 02:41 PM PDT

The sp_add_alert stored procedure can react to different system messages and execute a job in response. It can also notify a person with the message text by email, pager, or net send.

But how do I pass the sysmessage message (corresponding to the event that caused the alert) not to a person but to the job that is executed in response to the alert?

Let's consider this message:

select [description] from sysmessages where msglangid = 1033 and error = 829  

This will yield:

Database ID %d, Page %S_PGID is marked RestorePending, which may indicate disk corruption. To recover from this state, perform a restore.

I'd like to receive this message in the job, so the job knows what %d and %S_PGID caused a problem.
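If I remember correctly, SQL Server Agent exposes alert properties to the job step as tokens; a minimal sketch of a T-SQL job step that captures them (the logging table is hypothetical):

-- inside the job step that the alert fires
INSERT INTO dbo.AlertLog (error_number, severity, message_text, database_name)   -- hypothetical logging table
VALUES ($(ESCAPE_SQUOTE(A-ERR)),
        $(ESCAPE_SQUOTE(A-SEV)),
        N'$(ESCAPE_SQUOTE(A-MSG))',
        N'$(ESCAPE_SQUOTE(A-DBN))');

The tokens are only replaced when the step runs in response to an alert, so this is a sketch of the mechanism rather than a tested solution for message 829 specifically.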

Looking for one value in multiple tables and still returning rows if it is found in any of those tables

Posted: 30 Jul 2013 01:15 PM PDT

SELECT a.id, a.Nome, d.email, e.ddd, e.telefone
FROM cadClientes as a
join cadCliente_ParamsPF as b on a.id = b.idCliente
join Enderecos as c on a.id = c.idCliente
join Emails_Clientes as d on a.id = d.idCliente
join Telefones_Clientes as e on a.id = e.idCliente
join Contatos_Clientes as f on a.id = f.idCliente
WHERE idTipoCliente = 1
ORDER BY a.id

Based on that query, I am looking for a way to search for a name in cadClientes.name and Contatos_Clientes.name, and return rows if the name is found in either cadClientes or Contatos_Clientes.

The problem is that if there is no row in Contatos_Clientes that refers to a client in cadClientes, SQL Server returns 0 rows.

Is there a way to implement that in one query, or would I have to use multiple queries?
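A minimal sketch of one approach: join Contatos_Clientes with a LEFT JOIN so clients without contacts survive, and check the name in both tables (the search term and the Nome column on the contacts table are assumptions):

SELECT a.id, a.Nome, d.email, e.ddd, e.telefone
FROM cadClientes AS a
JOIN cadCliente_ParamsPF AS b ON a.id = b.idCliente
JOIN Enderecos AS c ON a.id = c.idCliente
JOIN Emails_Clientes AS d ON a.id = d.idCliente
JOIN Telefones_Clientes AS e ON a.id = e.idCliente
LEFT JOIN Contatos_Clientes AS f ON a.id = f.idCliente
WHERE a.idTipoCliente = 1
  AND (a.Nome LIKE '%maria%' OR f.Nome LIKE '%maria%')   -- hypothetical search term and column name
ORDER BY a.id;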

SHRINKFILE best practices and experience

Posted: 30 Jul 2013 01:27 PM PDT

Preamble: In general it's a big no-no, but believe me, there are rare cases when space is really needed. For example, Express Edition is limited to 10GB. Imagine that you discover that with a data type conversion (a blob column) you can free up a significant amount of space. But after that the DB file still has the same size, as we know, and the 10GB limit also didn't change magically. So some kind of SHRINK is needed. That was an example.

In my test environment I performed:

DBCC SHRINKFILE (Envision, NOTRUNCATE)
DBCC SHRINKFILE (Envision, TRUNCATEONLY)

That did the trick (I know that minimizes the free space; in the real world I would leave some free space). It took many, many hours to finish. As we know, it's a single-threaded process ("Strange behaviour DBCC Shrinkfile": "it works as a series of very small system transactions so there is nothing to rollback." - Paul Randal, http://www.sqlservercentral.com/Forums/Topic241295-5-1.aspx). We also know that it messes up index fragmentation big time (http://www.mssqltips.com/sqlservertip/2055/issues-with-running-dbcc-shrinkfile-on-your-sql-server-data-files/), and I can confirm that. I didn't experience the log file growth described in http://www.karaszi.com/SQLServer/info_dont_shrink.asp, though.

I issued some INDEX REBUILD and REORGANIZE and those finished within seconds.

My questions:

  1. What's the big deal about shrink if I can just fix the index fragmentation within seconds afterwards? I don't understand. Here on DBA Stack Exchange the "shrink" tag (http://dba.stackexchange.com/tags/shrink/info) says "Pretty much the worst thing you could do to a SQL Server database. In short: It sacrifices performance to gain space." and refers to another popular article about it. But index fragmentation can be fixed (see the sketch after this list).
  2. Why didn't I experience any log file growth?
  3. What if I REBUILD the indexes first, after the space free-up operation? Can that substitute for the first DBCC SHRINKFILE (Envision, NOTRUNCATE), so I only need DBCC SHRINKFILE (Envision, TRUNCATEONLY)? I have a feeling the two work on different logical levels, but I have to ask.
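For what it's worth, a minimal sketch of how fragmentation can be checked before and after such operations (run inside the shrunk database; LIMITED mode keeps the scan cheap):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;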

Bottom line: I promise I won't do shrink regularly or anything. But this is a situation where a cap is hit and shrink is needed.

How do I let SQL Server 2005 know that I changed the host name on an Amazon AWS server?

Posted: 30 Jul 2013 12:46 PM PDT

I am using an Amazon AWS SQL Server 2008 R2 server for dev purposes. In order to follow a new naming convention, we changed the name of the host. I then tried to let SQL Server know about this server name change, with the usual:

EXEC sp_dropserver '<oldname>'
GO
EXEC sp_addserver '<newname>', 'local'
GO

But then SQL Server complains that server oldname does not exist.

When I do this: select @@SERVERNAME

I get back a host name that starts with 'IP-' and is then followed by some hex. Apparently Amazon does some funky DNS aliasing of some sort behind the scenes and comes up with its own internal name, even though I was using oldname and SQL Server itself thought it was oldname.

How do I let SQL Server know that the name of the server is now newname?

thanks aj
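A minimal sketch of how the currently registered name can be checked before dropping anything (the instance's own entry in sys.servers is server_id 0; the example name is hypothetical):

SELECT server_id, name
FROM sys.servers
WHERE server_id = 0;

-- then drop whatever name that query returned, not the old DNS name:
-- EXEC sp_dropserver 'IP-0A1B2C3D';       -- hypothetical registered name
-- EXEC sp_addserver '<newname>', 'local';
-- a SQL Server service restart is required before @@SERVERNAME reflects the change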

ORA-03113: end-of-file on communication channel Can not connect

Posted: 30 Jul 2013 12:24 PM PDT

This is the log file:

Errors in file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_arc0_5024.trc:
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: 'D:\ORACLEBACKUP\XE\ONLINELOG\O1_MF_3_8M981VNW_.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) Das System kann den angegebenen Pfad nicht finden.
Errors in file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_lgwr_4064.trc:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLEBACKUP\XE\ONLINELOG\O1_MF_1_8M981W22_.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) Das System kann den angegebenen Pfad nicht finden.
Errors in file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_lgwr_4064.trc:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLEBACKUP\XE\ONLINELOG\O1_MF_1_8M981W22_.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) Das System kann den angegebenen Pfad nicht finden.
Errors in file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_ora_664.trc:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLEBACKUP\XE\ONLINELOG\O1_MF_1_8M981W22_.LOG'
USER (ospid: 664): terminating the instance due to error 313
System state dump requested by (instance=1, osid=664), summary=[abnormal instance termination].
System State dumped to trace file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_diag_3556.trc
Dumping diagnostic data in directory=[cdmp_20130730170815], requested by (instance=1, osid=664), summary=[abnormal instance termination].
Instance terminated by USER, pid = 664

Does somebody have an idea how to solve the problem? (The German O/S error translates to "The system cannot find the specified path.")

I am working on Windows Server. Do you need any more information?
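For context, a minimal sketch of how the current redo log members can be listed once the instance is mounted (the paths in the error suggest the D:\ORACLEBACKUP location is missing or unreachable):

SELECT l.group#, l.status, f.member
FROM v$log l
JOIN v$logfile f ON f.group# = l.group#
ORDER BY l.group#;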

Which system parameters should be considered for the standard VACUUM process

Posted: 30 Jul 2013 05:25 PM PDT

We want to run the standard vacuum process on our production database, which is over 100 GB and has millions of dead tuples.

Can anyone suggest which system parameters we need to keep in mind when setting the cost-based vacuum settings? I mean things like CPU, I/O, memory, and disk.

We cannot run VACUUM FULL, as the database should be up and running continuously, so we just want to find the most appropriate values without affecting the system too much.
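For reference, a minimal sketch of the cost-based settings a manual VACUUM respects; the values here are only illustrative starting points, not recommendations, and the table name is hypothetical:

-- session-level settings for a manual, throttled vacuum run
SET vacuum_cost_delay = 20;      -- ms to sleep each time the cost limit is reached
SET vacuum_cost_limit = 200;     -- accumulated page cost that triggers the sleep
VACUUM VERBOSE ANALYZE my_big_table;

The per-page costs (vacuum_cost_page_hit, vacuum_cost_page_miss, vacuum_cost_page_dirty) can also be tuned, which is where the CPU/IO/memory/disk trade-off actually shows up.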

Calculated Measure to get only most current from one dimension on snapshot fact but keep other filters

Posted: 30 Jul 2013 03:11 PM PDT

I'm working on a tabular cube in SSAS 2012 SP1 CU4. I have 3 dimensions (Requisition, Requisition Status, Date) and 1 fact (Requisition Counts). My fact table is at the grain of requisitionKEY, RequisitionStatusKEY, SnapshotDateKey.

I have calculated measures that essentially get the lastnonempty value (like a semi-additive measure) for the given period whether it is Year, Month, Or Date:

Openings:=CALCULATE(Sum('Requisition Counts'[NumberOfOpeningsQT]),
    Filter('Date','Date'[DateKey] = Max('Requisition Counts'[SnapshotDateKEY])))

This works well until you throw Requisition Status into the mix. I have rows for each requisition for every day in 2013. For one of the requisitions, the Requisition Status was Pending for the first 6 months and then it changed to Approved on all data from July 1 to date. When I summarize the number of openings for the requisition at the month level for July, users see two rows: the sum of the openings for the last populated day in July that it was pending and the sum of the openings for the last populated day in July that it was approved.
[screenshot: pivot table]

Although the total of 2 is correct, I need to change this calculation so that I only get the most current requisition status for the date period selected (Approved) and either show 0 or null or blank for the Pending approval line in the pivot table.

The Requisition Status table looks like this: [screenshot: Requisition Status table]

Update: Here is a link to a PowerPivot model I made that has some scrubbed data in it to help answer this question. This should better explain the table schemas. The NumberOfOpeningsQT field is basically the number of open positions they have for that job. Sometimes it is 1, sometimes it is more than 1. It doesn't usually change over time, but I guess it could. I'm trying to make the Openings calculation give me the desired answer. I threw some other calculations in there to show some things I had tried that had promise but that I couldn't get to work.

Need to install Oracle Express 11g Release 2 on a Windows 7 64-bit laptop

Posted: 30 Jul 2013 06:45 PM PDT

I need the Oracle 11g Release 2 sample schemas (HR, OE, etc.) in order to do most of the available online tutorials. I was hoping to install Oracle Express Edition on my Windows 7 laptop to get these; but I have never heard of anybody successfully installing Oracle XE on a 64-bit Windows platform.

Is there a version of Oracle XE 11g R2 available for Windows 7? And if so, could you please point me to it?

Thanks...

Help my database isn't performing fast enough! 100M Merge with 6M need < 1 hour!

Posted: 30 Jul 2013 05:45 PM PDT

I have a server right now receiving more raw data files in 1 hour than I can upsert (insert -> merge) in an hour.

I have a table with 100M (rounded up) rows. The table is currently MyISAM. It has 1000 columns, mostly boolean and a few varchar.

Currently the fastest way I've found to get the information into my DB is:

  1. Process raw data into CSV files.
  2. LOAD DATA INFILE into the rawData table.
  3. Insert the rawData table into Table1 (on duplicate key, run my function).
  4. Truncate rawData.
  5. Repeat.

This worked fine until I'm merging 6M+ rows into 100M rows and expecting it to take under an hour.

I have 16G of RAM, so I set my Key_Buffer_Pool to 6G. I have my query cache pool set to 16M and my query cache limit set to 10M. I would just replace the information, however it has to be an upsert: update the fields that are true if the row exists, and insert it if it does not.

Things I'm looking into at the moment: possibly switching the table to InnoDB? I'm not sure about the performance, as the insert into an empty table is fine; it's the merge that's slow.

Maybe allowing more table cache? Or even query cache?

Merge Code:

b.3_InMarket = (b.3_InMarket OR r.3_InMarket),

To compare my 2 bool columns.
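As a point of reference, a minimal sketch of the kind of upsert I understand this fragment to belong to; the target table, key column, and the assumption that id has a unique/primary key are mine, not from the question:

INSERT INTO Table1 (id, `3_InMarket`)              -- hypothetical target table and key column
SELECT r.id, r.`3_InMarket`
FROM rawData AS r
ON DUPLICATE KEY UPDATE
    `3_InMarket` = (`3_InMarket` OR VALUES(`3_InMarket`));

In the ON DUPLICATE KEY UPDATE clause, the bare column refers to the existing row's value and VALUES() refers to the incoming one, which is what the OR of the two boolean columns expresses.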

Update

  • OK, I set up RAID 0.
  • Changed my query to lock the tables for writing when inserting.
  • When importing the CSV I'm disabling keys, then re-enabling them before the upsert.
  • Changed concurrent_insert to 2.

How un-clustered is a CLUSTER USING table

Posted: 30 Jul 2013 12:51 PM PDT

I have some tables which benefit greatly from CLUSTER ON/CLUSTER USING in PostgreSQL:

# CLUSTER table USING index_name;
# ANALYZE VERBOSE table;
# CLUSTER VERBOSE;

A maintenance task periodically runs CLUSTER VERBOSE to keep things fresh. But is there a test I can run to see how fragmented the table is, prior to running CLUSTER VERBOSE? Maybe something like:

# CLUSTER ANALYZE
table 40000 records. 4000 observed clusters, 5000 potential clusters (20% fragmentation)

Note that I use CLUSTER so data accessed at the same time is "defragmented" into a small number of disk blocks. For example, I have thousands of attributes that go with each page. A CLUSTER page_attribute USING page_id; puts all the attributes next to each other, greatly reducing disk load.
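A minimal sketch of one proxy for this (not an exact fragmentation measure): pg_stats.correlation for the clustered column shows how well the physical row order matches the index order after ANALYZE, with 1.0 meaning perfectly clustered:

SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'page_attribute'      -- table from the example above
  AND attname   = 'page_id';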

Bitmask Flags with Lookup Tables Clarification

Posted: 30 Jul 2013 07:45 PM PDT

I've received a dataset from an outside source which contains several bitmask fields as varchars. They range in length from as short as 3 to as long as 21 bits. I need to be able to run SELECT queries based on these fields using AND or OR logic.

Using a calculated field, where I just convert the bits into an integer value, I can easily find rows that match an AND query, by using a simple WHERE rowvalue = requestvalue, but the OR logic would require using bitwise & in order to find matching records.

Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel that there would be a huge performance hit when doing bitwise & operations to filter my SELECT results.

I came across this answer from searching and it looked like it may fit my needs, but I need some clarification on how it is implemented.

Is this as simple as creating a lookup table that has all possible search conditions?

Example for 3 bits using (a & b) (Edit: Wrong bitwise op)

001,001
001,011
001,101
001,111
010,010
010,011
010,110
011,011
011,111
etc

The author mentions that it's counter-intuitive initially, but I can't help but feel I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows.

Any clarifications on the answer I linked above or other suggestions that would preserve the existing database are appreciated.

Edit: A more concrete example using small data.

Four flags, HasHouse,HasCar,HasCat,HasDog, 0000 is has none, 1111 is has all.

Any number of flags, from all to none, can be flipped, and results must be filtered where selection matches all (Using exact value comparison) or at least 1 (Using bitwise &).

Adding a single calculated column for each bitmask is ok, but adding a column for each bit for more than 100 bits, coupled with how to insert/update the data is why I'm trying to find alternative solutions.
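For comparison, a minimal T-SQL-flavoured sketch of the plain bitmask filters being described, without any lookup table; the table, the int Flags column, and the chosen mask value are hypothetical:

DECLARE @mask int = 5;              -- hypothetical selection of two of the four flags

-- "matches all selected flags"
SELECT *
FROM People                         -- hypothetical table with an int Flags column
WHERE Flags & @mask = @mask;

-- "matches at least one selected flag"
SELECT *
FROM People
WHERE Flags & @mask <> 0;

The performance concern in the question is that neither predicate can seek an ordinary index on Flags, which is what motivates the lookup-table idea in the linked answer.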

SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?

Posted: 30 Jul 2013 01:45 PM PDT

I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. Trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job.

SQLite writing a query where you select only rows nearest to the hour

Posted: 30 Jul 2013 04:45 PM PDT

I've got a set of data where readings have been taken approximately every minute for about three months, and the time has been stored as a unix timestamp. There is no regularity to the timestamps (i.e. the zero minute of the hour may not contain a reading; one measurement could be at 00:59:55 and the next at 01:01:01), and days may be missing.

What I need is the row nearest to the hour, with the timestamp rounded to the hour, as long as the nearest value is not more than 30 minutes away from the hour.

Where a matching hour could not be found it would be helpful if the query could include a time but no value.

I realise I'm asking a lot, but this would be incredibly helpful. Thanks for taking the time to read this. James

BTW, the table is just PK (autoincrement), timestamp, value, sensor id (FK). I've tried this to get the data out:

SELECT strftime('%S', time, 'unixepoch'),
       strftime('%M', time, 'unixepoch'),
       strftime('%H', time, 'unixepoch'),
       strftime('%d', time, 'unixepoch'),
       strftime('%m', time, 'unixepoch'),
       strftime('%Y', time, 'unixepoch'),
       value
FROM Timestream
WHERE idSensor = 359;
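A rough sketch of one way to approach it, relying on SQLite's documented behaviour that a bare column in a query with a single MIN() aggregate comes from the row where the minimum occurs (column names match the table described above):

SELECT ((time + 1800) / 3600) * 3600                  AS hour_ts,   -- nearest hour, unix seconds
       MIN(ABS(time - ((time + 1800) / 3600) * 3600)) AS dist_s,    -- seconds away from that hour
       value
FROM Timestream
WHERE idSensor = 359
GROUP BY hour_ts
ORDER BY hour_ts;

Every reading is within 1800 seconds of its nearest hour by construction, so the 30-minute cap only matters for hours with no readings at all; those simply produce no row here, and showing them with a time but no value would need a calendar of hours joined in.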

Breaking Semisynchronous Replication in MySQL 5.5

Posted: 30 Jul 2013 08:46 PM PDT

I've set up Semisynchronous Replication between two MySQL 5.5 servers running on Windows 7.

My application is running and updating the database of the master server and same is being updated in the slave database server.

But due to some unknown reasons sometimes, Replication breaks.

On running the command:

SHOW STATUS LIKE 'Rpl_semi_sync%';  

It gives this status:

'Rpl_semi_sync_master_no_times', '0'
'Rpl_semi_sync_master_no_tx', '0'
'Rpl_semi_sync_master_status', 'ON'     <<-------------
'Rpl_semi_sync_master_timefunc_failures', '0'
'Rpl_semi_sync_master_tx_avg_wait_time', '338846'
'Rpl_semi_sync_master_tx_wait_time', '29479685'
'Rpl_semi_sync_master_tx_waits', '87'
'Rpl_semi_sync_master_wait_pos_backtraverse', '0'
'Rpl_semi_sync_master_wait_sessions', '0'
'Rpl_semi_sync_master_yes_tx', '3106'

Ideally, in semisynchronous replication, when the sync breaks the status should show OFF, since the master is not able to receive any acknowledgement from the slave. Please help us in this regard.

Thought about this SQL Server backup plan?

Posted: 30 Jul 2013 01:01 PM PDT

I just started a new job, and I'm reviewing the database maintenance plan. I've got quite a bit of experience writing SQL, but not much experience with DB administration. My last job was at a large company, and they didn't let regular people touch that sort of stuff.

We are locked into SQL Server 2000 (it's embedded in some quite old software and we can't upgrade yet). The current maintenance plan (Full Recovery model) does the following:

Every hour from 6am - 11pm:

  1. backup log Accounting to Accounting_Logs with noinit

Every night at 1am, this happens:

  1. backup Log Accounting WITH TRUNCATE_ONLY
  2. DBCC SHRINKDATABASE (Accounting, TRUNCATEONLY)
  3. backup database Accounting_ReadOnly to Accounting with init

Then at 3am:

  1. all the indexes are rebuilt

Is this a decent plan? Will this give us good backups that are easy to recover? I know I'm asking for a lot, but any thought/comments/suggestions would be appreciated.

Please let me know if you need more information. Thanks!

Can I use a foreign key index as a shortcut to getting a row count in an INNODB table?

Posted: 30 Jul 2013 12:33 PM PDT

I have a table that has a large number of rows in it.

The primary key (an auto-incrementing integer) is, by default, indexed.

While waiting for a row count to be returned I did an EXPLAIN in another window, and the results were as follows:

mysql> SELECT COUNT(1) FROM `gauge_data`;
+----------+
| COUNT(1) |
+----------+
| 25453476 |
+----------+
1 row in set (2 min 36.20 sec)

mysql> EXPLAIN SELECT COUNT(1) FROM `gauge_data`;
+----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
| id | select_type | table      | type  | possible_keys | key             | key_len | ref  | rows     | Extra       |
+----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
|  1 | SIMPLE      | gauge_data | index | NULL          | gauge_data_FI_1 | 5       | NULL | 24596487 | Using index |
+----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
1 row in set (0.13 sec)

Since the primary key is guaranteed to be unique, can I just take the number of rows from the EXPLAIN and use that as the row count of the table?

BTW, I believe the difference in numbers is due to the fact that more data is continually being added to this table.
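For what it's worth, a minimal sketch of the other common shortcut: the optimizer's row estimate is also exposed in information_schema, and for InnoDB it is an estimate in the same way the EXPLAIN figure is, so neither gives an exact count:

SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME   = 'gauge_data';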

Download SQL Server profiler for SQL Server Management Studio

Posted: 30 Jul 2013 01:02 PM PDT

How can I profile a SQL Server 2008 database to see code that's being executed on a particular database? I remember using the SQL Server profiler, but I don't see it in SQL Server Management Studio after downloading SQL Server 2008 R2 Express. Where can I download that tool and install it? Do I need the full version of SQL Server 2008 in order to see this option?

Slow queries on SQL Server [closed]

Posted: 30 Jul 2013 12:40 PM PDT

We have SQL Server 2005. Our main table is the archive table, which has nearly 200 million rows in it. There are 2000 clients that connect to a service, and the service writes their information to the archive. We also have another service which gets the clients' information from the archive in batches, calculates some additional information for each row, and rewrites them in batches again.

On the web side we have 100-200 users online at a time, and most of the queries depend on the archive table. I built all possible indexes on archive, I'm using .NET Framework 3.5, and I am connecting to the database with a standard connection string.

The problem is that when a user requests a one-day report, it returns in 10-15 seconds for 50 rows. The one-month reports take more time, like 2-3 minutes for 5k-6k rows. I am not a DBA, but we don't have one, so I am expected to tackle this problem. Can you make any suggestions?

Thanks.

What is the easiest way to get started using databases with real data?

Posted: 30 Jul 2013 01:22 PM PDT

I have a project that could benefit from using a database, but I have no experience with databases, don't have access to a server, and have relatively little experience working with things living server-side.

If I'm going to have to tackle a learning curve, I'd prefer to learn something with broad applicability (such as SQL) but would settle for learning something like Access if it is sufficiently powerful for the task I'm currently trying to tackle. Of course, I'd also rather not drop $150 on Access if it can be helped since I'm just tinkering.

I've downloaded LibreOffice Base as well as something called SQLiteBrowser, but I wanted to check first before I invest time learning those particular applications and their flavors of SQL whether those tools will be sufficient for what I want to do.

I want to be able to:

  • import data from a CSV or from Excel
  • run queries that equate to "select x where this is that and this contains that and any of these contain that"
  • write(?) a new field which indicates those results which match a given query

Again, I'm willing to learn, but it would be nice not to have to learn a bunch of intermediate stuff about IT before I can focus on learning databases and, if necessary, the particulars of a given application.

How to retrieve the definition behind statistics added to tables

Posted: 30 Jul 2013 12:38 PM PDT

Is there a way to programmatically retrieve the definition of each STATISTICS object added to table columns and indexes, for both user-added and system-created statistics? There are many STATISTICS like '__WA_Sys_*' that are added by SQL Server.

I need to re-write some of them and add more, but there are too many to do them manually with Management Studio.
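A minimal sketch of the catalog views I would start from; this lists each statistics object and its columns, though it does not reproduce a full CREATE STATISTICS script:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       s.auto_created,
       s.user_created,
       c.name                   AS column_name
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
  ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
JOIN sys.columns AS c
  ON c.object_id = sc.object_id AND c.column_id = sc.column_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY table_name, stats_name, sc.stats_column_id;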

Get failed SQL Server agent job?

Posted: 30 Jul 2013 01:38 PM PDT

How do I get a list of jobs that failed when they ran last night? I have only found the following PowerShell script. What's the SQL equivalent?

dir $psPath\Jobs | % { $_.EnumHistory() } | ? { $_.RunStatus -ne 1 }  
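A minimal sketch of the T-SQL version I would try, using msdb's job history tables (the date filter is only illustrative; run_date is an int in yyyymmdd form):

SELECT j.name AS job_name,
       h.run_date,
       h.run_time,
       h.step_id,
       h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
WHERE h.run_status = 0      -- 0 = Failed (use <> 1 to mirror the PowerShell's "not succeeded")
  AND h.run_date  >= CONVERT(char(8), DATEADD(day, -1, GETDATE()), 112)
ORDER BY h.run_date DESC, h.run_time DESC;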

Where to start learning to understand SQL Server architecture and internals?

Posted: 30 Jul 2013 01:01 PM PDT

I have a basic knowledge of T-SQL and SQL Server components. My goal is to master my skills and learn everything about SQL Server to eventually become a DBA in the future. I would like to understand deep SQL Server internals, how exactly everything works, when and why. Could you please suggest a good place to start? IMHO it's just not possible to pick this up only by doing programming work.

Upgrading Instances with Mirroring

Posted: 30 Jul 2013 12:24 PM PDT

If you want to upgrade or install a patch on two separate instances that house both the principal and mirrored database, how can you go about that? If the database that is being mirrored needs to be available 24/7 and you don't have a window to go offline, what is the best means to do this?

EDIT: this is with SQL Server.

Are heaps considered an index structure or are they strictly a table structure without index?

Posted: 30 Jul 2013 12:32 PM PDT

Inspired by this post: https://twitter.com/#!/SQLChicken/status/102930436795285505

Heaps: Are they considered an index structure or are they strictly a table structure without index?

The smallest backup possible ... with SQL Server

Posted: 30 Jul 2013 12:35 PM PDT

Daily we ship our SQL Server backups across the WAN. We need to minimize the size of these backups so it does not take forever.

We don't mind if our backup process takes a bit longer; as it stands we need to move 30gigs of compressed backup across the WAN that takes over 10 hours.

There are 2 options we have to get smaller daily backups.

  1. Log shipping, which would mean we would have to restructure DR process.
  2. Strip information out of the db and rebuild on the other side (drop non clustered indexes, pack clustered indexes at 100% - rebuild on the other side)

Both would involve a fair amount of work from our part. We are using SQL Server 2008 pro, all backups are compressed.

Are there any commercial products that can give us similar backup size to option (2)?

Is there a comprehensive script out there that will allow us to accomplish (2)? (handling indexed views, filtered indexes, foreign keys and so on)

Location of Maintenance Plan's Back Up Database Tasks information (SQL Server 2005)

Posted: 30 Jul 2013 05:52 PM PDT

I would like to know where, in the database or on the file system, the information about the Back Up Database Task in a Maintenance Plan's tasks is stored.

I can find the Job in msdb.dbo.sysjobs. I can find the Subplan in msdb.dbo.sysmaintplan_subplans.

But I need to find where and how the Task is being stored.

Any help will be greatly appreciated.
