Wednesday, July 3, 2013

[SQL Server] SQL to combine rows based on dates and other common factors

SQL to combine rows based on dates and other common factors

Posted: 03 Jul 2013 04:40 AM PDT

Hi all,

We have a database for employees, jobs and work allocation. I am trying to find an efficient way to combine rows based on date and workload, but excluding vacation allocations. Here is an example of the current data set:

[b]ID Employee Job StartDate EndDate Workload[/b]
1 John Doe HSBC 01/01/2013 31/12/2013 100
2 John Doe Vacation 17/06/2013 21/06/2013 100
3 John Doe HSBC 01/01/2014 31/12/2014 100
4 John Doe Vacation 19/08/2013 23/08/2013 100
5 John Doe Barclays 01/01/2014 31/01/2014 50
6 John Doe Barclays 01/01/2015 31/01/2015 50
7 John Doe Santander 06/01/2014 25/01/2014 50
8 John Doe Santander 05/03/2014 17/07/2014 80
9 John Doe Santander 05/03/2014 17/07/2014 50
10 John Doe Vacation 13/01/2014 17/01/2014 100

The final result should be as follows (bold indicates the changed rows, excluding the header). Note that the IDs are not important at this point, and that the work allocations for the job 'Santander' remain separate rows, as one is at 50% workload and the other at 80%. Rows should only be combined if the employee, job and workload are the same; 'Vacation' rows are not combined at all.

[b]ID Employee Job StartDate EndDate Workload[/b]
[b]1 John Doe HSBC 01/01/2013 31/12/2014 100[/b]
2 John Doe Vacation 17/06/2013 21/06/2013 100
4 John Doe Vacation 19/08/2013 23/08/2013 100
[b]5 John Doe Barclays 01/01/2014 31/01/2015 50[/b]
[b]7 John Doe Santander 06/01/2014 17/07/2014 50[/b]
8 John Doe Santander 05/03/2014 17/07/2014 80
10 John Doe Vacation 13/01/2014 17/01/2014 100

Here is the DDL, which another forumite kindly provided for an earlier query.
Any suggestions would be greatly appreciated.

[code="sql"]
set dateformat dmy

if OBJECT_ID('tempdb..#Something') is not null
    drop table #Something

create table #Something
(
    ID int,
    Employee varchar(20),
    Job varchar(20),
    StartDate datetime,
    EndDate datetime,
    Workload int
)

insert #Something
select *
from (values
    (1, 'John Doe', 'HSBC', '01/01/2013', '31/12/2013', 100),
    (2, 'John Doe', 'Vacation', '17/06/2013', '21/06/2013', 100),
    (3, 'John Doe', 'HSBC', '01/01/2014', '31/12/2014', 100),
    (4, 'John Doe', 'Vacation', '19/08/2013', '23/08/2013', 100),
    (5, 'John Doe', 'Barclays', '01/01/2014', '31/01/2014', 50),
    (6, 'John Doe', 'Barclays', '01/01/2015', '31/01/2015', 50),
    (7, 'John Doe', 'Santander', '06/01/2014', '25/01/2014', 50),
    (8, 'John Doe', 'Santander', '05/03/2014', '17/07/2014', 80),
    (9, 'John Doe', 'Santander', '05/03/2014', '17/07/2014', 50),
    (10, 'John Doe', 'Vacation', '13/01/2014', '17/01/2014', 100)
) x (a, b, c, d, e, f)

select * from #Something
[/code]
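Judging by the expected output (the Barclays 2014 and 2015 rows merge despite the gap between them), the rule appears to be simply min start / max end per (employee, job, workload), with vacations passed through untouched. If so, a plain GROUP BY plus UNION ALL is enough. A sketch of that idea, run through Python's sqlite3 so it is self-contained (ISO dates and an abridged data set; the same shape works as T-SQL against #Something):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Something (ID INT, Employee TEXT, Job TEXT,
                        StartDate TEXT, EndDate TEXT, Workload INT);
INSERT INTO Something VALUES
 (1,'John Doe','HSBC','2013-01-01','2013-12-31',100),
 (2,'John Doe','Vacation','2013-06-17','2013-06-21',100),
 (3,'John Doe','HSBC','2014-01-01','2014-12-31',100),
 (5,'John Doe','Barclays','2014-01-01','2014-01-31',50),
 (6,'John Doe','Barclays','2015-01-01','2015-01-31',50),
 (7,'John Doe','Santander','2014-01-06','2014-01-25',50),
 (8,'John Doe','Santander','2014-03-05','2014-07-17',80),
 (9,'John Doe','Santander','2014-03-05','2014-07-17',50);
""")

rows = conn.execute("""
  -- Non-vacation rows collapse to one row per (employee, job, workload)
  SELECT MIN(ID), Employee, Job, MIN(StartDate), MAX(EndDate), Workload
  FROM Something WHERE Job <> 'Vacation'
  GROUP BY Employee, Job, Workload
  UNION ALL
  -- Vacation rows are never combined
  SELECT ID, Employee, Job, StartDate, EndDate, Workload
  FROM Something WHERE Job = 'Vacation'
  ORDER BY 1
""").fetchall()
for r in rows:
    print(r)
```

If the intent is instead to merge only contiguous date ranges, a gaps-and-islands approach is needed rather than this simple grouping.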

ODBC Connection Fail

Posted: 03 Jul 2013 07:22 AM PDT

Hello,

I am having a problem with an Access application that we use to show and print checks. I believe the application was created with Access 2003 connecting to SQL Server 2005. It ran OK, though not always perfectly, on my old machine (Windows XP) using Access 2007. When we recently upgraded to SQL Server 2008, it continued to run OK. Now we've upgraded from XP to 2007 and I get an 'ODBC -- connection to 'SQL Native ClientDbaseName' failed' error.

I can create a new Access database and link to the tables I want to, so I believe that means I have the correct drivers. But maybe the checks database is still connecting with an old string? Although I can update the tables successfully, I still get the error.

I am hoping not to have to rewrite the Access checks program, but if I do, should I use a file source as my .dsn instead of a machine source? It seems like we've had problems getting this program onto other machines, and I'm thinking that might be because it was created with a machine source.

Any articles on recreating ODBC connections when moving to a new system?

Thanks!
Mary

Difference between Cluster and Index

Posted: 03 Jul 2013 03:53 AM PDT

Hi, I am really confused about the difference between `Cluster` and `Index`. I think they are the same. Please help me correct my misunderstandings.

1. What is a cluster? I think a cluster creates indexes on a column, so you can reach a row faster.

2. What is an index? I think indexing a column helps to reach a row faster. (So what is the difference between this and a cluster?)

3.
[code="sql"]
CREATE CLUSTERED INDEX myIndex
ON tableName (Column1, Column2, Column3)
[/code]
What the...? :blink: Does it mean that if two rows have the same value for Column1, their order within the index is decided by Column2?

4.
[code="sql"]
CREATE NONCLUSTERED INDEX myIndex
ON tableName (columnName)
[/code]
What? :blink: A nonclustered index? How is it possible for an index to be nonclustered?

I'm really confused. Thank you for your help.
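On question 3 specifically: yes, a composite index orders its entries by the columns in sequence, exactly like sorting tuples, so Column2 only breaks ties between rows with equal Column1 values. A tiny illustration of that ordering rule:

```python
rows = [(2, 'b', 9), (1, 'z', 5), (1, 'a', 7), (2, 'a', 3)]

# A composite index on (Column1, Column2, Column3) keeps entries in
# tuple order: Column2 only decides order within equal Column1 values.
ordered = sorted(rows)
print(ordered)  # [(1, 'a', 7), (1, 'z', 5), (2, 'a', 3), (2, 'b', 9)]
```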

SQL to split row by date (split into multiple rows)

Posted: 11 Jun 2013 08:42 PM PDT

I am looking for help with splitting a row into multiple rows based on overlapping dates. As an example, I have a table with the following data:

[b]Row ID, Employee, Job, Start Date, End Date, Workload[/b]
1, John Doe, HSBC, 01/01/2013, 31/12/2013, 100
2, John Doe, Vacation, 17/06/2013, 21/06/2013, 100
3, John Doe, Vacation, 19/08/2013, 23/08/2013, 100
4, John Doe, Barclays, 01/01/2014, 31/01/2014, 50
5, John Doe, Santander, 06/01/2014, 25/01/2014, 50
6, John Doe, Vacation, 13/01/2014, 17/01/2014, 100

I am looking to split the banking rows where they overlap with a vacation. For example, the final result should be:

[b]Row ID, Employee, Job, Start Date, End Date, Workload[/b]
1, John Doe, HSBC, 01/01/2013, 16/06/2013, 100
2, John Doe, Vacation, 17/06/2013, 21/06/2013, 100
[b]3, John Doe, HSBC, 22/06/2013, 18/08/2013, 100[/b]
4, John Doe, Vacation, 19/08/2013, 23/08/2013, 100
[b]5, John Doe, HSBC, 24/08/2013, 31/12/2013, 100[/b]
6, John Doe, Barclays, 01/01/2014, 12/01/2014, 50
[b]7, John Doe, Barclays, 18/01/2014, 31/01/2014, 50[/b]
8, John Doe, Santander, 06/01/2014, 12/01/2014, 50
[b]9, John Doe, Santander, 18/01/2014, 25/01/2014, 50[/b]
10, John Doe, Vacation, 13/01/2014, 17/01/2014, 100

New rows after the split are in bold. The Row ID should be unique, although it doesn't need to be sequential. Any help or guidance would be appreciated.
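The split logic itself is easier to see outside SQL first. A sketch in Python (a hypothetical helper, assuming inclusive date ranges) that reproduces the HSBC rows from the expected output above:

```python
from datetime import date, timedelta

def split_around(job, vacations):
    """Split one inclusive (start, end) job interval around overlapping
    vacation intervals, dropping the vacation days themselves."""
    pieces = [job]
    for v_start, v_end in sorted(vacations):
        next_pieces = []
        for s, e in pieces:
            if v_end < s or v_start > e:     # no overlap: keep as-is
                next_pieces.append((s, e))
                continue
            if s < v_start:                  # part before the vacation
                next_pieces.append((s, v_start - timedelta(days=1)))
            if e > v_end:                    # part after the vacation
                next_pieces.append((v_end + timedelta(days=1), e))
        pieces = next_pieces
    return pieces

hsbc = (date(2013, 1, 1), date(2013, 12, 31))
vacs = [(date(2013, 6, 17), date(2013, 6, 21)),
        (date(2013, 8, 19), date(2013, 8, 23))]
print(split_around(hsbc, vacs))
```

This yields the three HSBC intervals 01/01–16/06, 22/06–18/08 and 24/08–31/12, matching the bolded rows; a set-based T-SQL version would apply the same boundary arithmetic with joins against the vacation rows.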

Writing script

Posted: 03 Jul 2013 02:00 AM PDT

I need to add a script so that my system will pull a Word document using a program I have. Can I write the script in Notepad, save it as "xxx.sql", and then just save it in the folder that houses the doc? I already have the script written out.

DTS To SSIS Conversion

Posted: 03 Jul 2013 01:21 AM PDT

I am trying to convert a DTS package to SSIS. I have used Pragmatic Works' tool to convert it so far, but it does not convert the Visual Basic. The code looks like:

[code="vb"]
Function Main()
    If Left(DTSSource("Col001"), 6) = "STOCK" Or Left(DTSSource("Col001"), 6) = "PROJCT" Then
        DTSDestination("Column_Out") = Mid(DTSSource("Col001"), 7, 25)
        DTSDestination("Column_Out_2") = Mid(DTSSource("Col001"), 60, 25)
        Main = DTSTransformStat_OK
    Else
        Main = DTSTransformStat_SkipRow
    End If
End Function
[/code]

With a little bit of research I was able to do this:

[code="vb"]
Public Overrides Sub NightlyInput_ProcessInputRow(ByVal Row As NightlyInputBuffer)
    Dim strRow As String
    strRow = Row.ToString
    If (Left(strRow, 6) = "STOCK") Or (Left(strRow, 6) = "PROJCT") Then
        NightlyOutBuffer.ColumnOut = Mid(strRow, 7, 25)
        NightlyOutBuffer.ColumnOut2 = Mid(strRow, 60, 25)
    End If
End Sub
[/code]

I don't seem to be getting any output from the code. Thanks for your help.

a query like factorial

Posted: 18 Mar 2013 05:00 AM PDT

Hi. I'm asking for help because I've been thinking about this question for more than two days. Imagine we have a table like this:

ID | Value
----------
1  | 19
2  | 90
3  | 20
4  | 8
5  | 9

I want my query to produce this output:

ID | Value
----------
1  | 19
2  | 19 + 90
3  | 19 + 90 + 20
4  | 19 + 90 + 20 + 8
5  | 19 + 90 + 20 + 8 + 9

I have already tried many queries. The last one I wrote is below, but it doesn't produce the correct output:

[code="sql"]
WITH CTE (rowNum, Value, Level) AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY ID DESC) as rowNum,
           Value,
           0 as Level
    FROM Mytable
    WHERE ID = (SELECT MIN(ID) FROM Mytable)
    UNION ALL
    SELECT ROW_NUMBER() OVER (ORDER BY ID DESC) as rowNum,
           Value,
           Level + 1
    FROM Mytable a
    INNER JOIN CTE b ON b.rowNum <= a.rowNum + 1 --a.rowNum are not allowed here
)
SELECT rowNum, SUM(Value)
FROM CTE
GROUP BY Level, rowNum
[/code]

Thank you for your help.
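What the poster describes is a running total rather than a factorial. On SQL Server 2012+ a window function (`SUM(Value) OVER (ORDER BY ID)`) does this in one pass; on older versions a correlated subquery works, at triangular-join cost. A sketch of the subquery form, run through Python's sqlite3 so it is self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Mytable (ID INT, Value INT);
INSERT INTO Mytable VALUES (1,19),(2,90),(3,20),(4,8),(5,9);
""")

# Correlated subquery: for each row, sum every Value up to and
# including that row's ID (no window functions required).
rows = conn.execute("""
  SELECT a.ID,
         (SELECT SUM(b.Value) FROM Mytable b WHERE b.ID <= a.ID) AS RunningTotal
  FROM Mytable a
  ORDER BY a.ID
""").fetchall()
print(rows)  # [(1, 19), (2, 109), (3, 129), (4, 137), (5, 146)]
```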

[how to] Adjacency list model vs nested set model or any other database models

Adjacency list model vs nested set model or any other database models

Posted: 03 Jul 2013 08:51 PM PDT

I have the following hierarchy

Terms -> Sub-Terms -> Keywords

1) There will be many terms, 2) each term has many sub-terms, and 3) each sub-term has many keywords.

A post can be attached to a Term, to a Sub-Term (which is under a Term), or to a Keyword (which is under a Sub-Term).

I am in a dilemma over whether to choose the adjacency list model or the nested set model for this categorization. Though this question has been answered many times in online forums, I am not able to settle on a model. Some say the nested set model is overkill for a fixed number of levels.

Could someone please point me in the right direction?

Thank you.
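For what it's worth, with a fixed three-level hierarchy the adjacency list stays very simple: two self-joins reach every keyword, with no nested-set bookkeeping. A minimal sketch (table and column names are illustrative, not prescribed by the question), using Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Plain adjacency list: one row per node, parent_id points up the tree
CREATE TABLE term (
    id        INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES term(id),  -- NULL for top-level Terms
    level     TEXT CHECK (level IN ('term','subterm','keyword')),
    name      TEXT NOT NULL
);
INSERT INTO term VALUES
 (1, NULL, 'term',    'Databases'),
 (2, 1,    'subterm', 'Indexing'),
 (3, 2,    'keyword', 'btree'),
 (4, 2,    'keyword', 'hash');
""")

# With a fixed depth of three, two self-joins reach every keyword;
# no recursive queries or nested-set left/right values needed.
rows = conn.execute("""
  SELECT t.name, s.name, k.name
  FROM term t
  JOIN term s ON s.parent_id = t.id
  JOIN term k ON k.parent_id = s.id
  WHERE t.parent_id IS NULL
""").fetchall()
print(rows)
```

A post attachment table would then just hold (post_id, term_id), pointing at a node of any level.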

MySQL Replicate Databases without Tables or Content

Posted: 03 Jul 2013 07:48 PM PDT

I would like to replicate all the databases from a MySQL master to a slave used for archiving data older than 30 days. It isn't common for this data to be accessed, so I would like it removed from the master. I only need to replicate the database table structure without any content, and only two of the ten tables (tablea and tableb) need to be created in the replicated structure.

The table content will be imported using an archive script that is using pt-archiver which is why I don't need the content replicated to the slave. However when the main database is removed from the master it should also remove the database from this archive slave. Is this possible using replication?

UNF to 3NF is this relation workable?

Posted: 03 Jul 2013 04:26 PM PDT

I've inherited a flat table of data consisting of around 2 million records. Looking at the data, it's already in 1NF; I'd like to get it into 3NF.

The data is Australian so suburb can't determine state as suburb names exist in more than one state, plus one postcode often covers more than one suburb. Meaning suburb, state, postcode should be a super key?

There are multiple companies at different sites, some have the same phone number (freephone 1300 or 1800) and others have distinct phone numbers, every record has a fax number and most have a phone number. ContactName is mostly blank so phone and fax numbers relate to companies not contacts.

Selection of records by users is typically RLIKE on the field BusinessTypeDescription coupled with geographic criteria which of course means a full table scan on every query and therefore super slow results.

Is the proposed table structure below in 3NF and is it practical?

EXISTING TABLE
-----------------------
ID (PK)
CompanyName
Address
Suburb
State
Region
Postcode
Country
ContactName (this field is mostly empty)
Phone
Fax
Mobile
Email (this field is mostly empty)
Website (this field is mostly empty)
PremiseType
BusinessTypeDescription
ANZSICCode
ANZSICDescription
RecordSource

This is the proposed structure...

COMPANY
---------------
ID (PK)
CompanyName
Address
LocationID (FK)
PhoneID (FK)
FaxID (FK)
MobileID (FK)
ContactName
PremiseType
BusinessTypeID (FK)
ANZSICCode (FK)
Email
Website
RecordSource

LOCATION
--------------
ID (PK)
Suburb
State
Postcode
RegionID (FK)
CountryID (FK)

COUNTRY
--------------
ID (PK)
Country

REGION
------------
ID (PK)
Region

PHONES
------------
ID
Phone

MOBILES
------------
ID
Mobile

FAXES
------------
ID
Fax

BUSINESSTYPE
----------------------
ID
BusinessTypeDescription

ANZSICS
----------------------
ANZSICCode
ANZSICDescription

What menu item in TOAD can be used to monitor SQL queries in real time in a database?

Posted: 03 Jul 2013 04:36 PM PDT

What menu item in TOAD can be used to monitor SQL queries in real time in a database? I own a TOAD license and have used the "live SQL monitoring" feature before, but I don't quite remember which menu item leads to it. Does anyone know? This is for monitoring an Oracle database.

I know about Menu-->Database-->Monitor-->SQL Tracker, which can bind to a local process client and trace SQL queries but I am looking to trace all queries from all clients without needing to bind to a process like the "SQL Tracker" tool does.

I have seen this work on my computer but I just can't remember how to find it. I have scoured all the menu items in my TOAD app and haven't been able to relocate it.

ALLOW_SNAPSHOT_ISOLATION Duration

Posted: 03 Jul 2013 07:26 PM PDT

I would like to enable ALLOW_SNAPSHOT_ISOLATION on a production database that has ~400 million total records. I understand that 14 bytes will need to be added to each record.

If I set ALLOW_SNAPSHOT_ISOLATION will it block for a period of time proportional to the record count, or will the data be updated asynchronously?

Primarily, I want to be sure that my database will not be out of service for hours when this setting is enabled.

Active Directory group permissions not being applied to user

Posted: 03 Jul 2013 05:27 PM PDT

I have an odd situation. An Active Directory group was recently created for the sole purpose of being a securable in SQL Server. I created a login based on the AD group, a user based on the login, and granted permissions to the user. All pretty standard stuff. When I impersonate one of the group members (using execute as login='domain\username'), I'm able to perform the granted action. However, when the user himself tries it, he gets "permission denied". I'm at a loss to explain the behavior and subsequently fix it.

ADO SQL modifies data on other tables?

Posted: 03 Jul 2013 02:33 PM PDT

I've been trying to update a record using a form that has a couple of comboboxes on it; those are populated from auxiliary tables containing city and department (the table that needs to be updated only holds the IDs from both "side" tables).

Since I was having a hard time inserting the data directly, I decided to use ADO to run the SQL directly.

So far my VBA code looks like this:

Private Sub btnClose_Click()
    Dim Msg, Style, Title, Ctxt, Help, Response, MyString

    Msg = "Want to save changes?"
    Style = vbYesNo + vbQuestion
    Title = "Confirm changes"
    Ctxt = 1000

    Response = MsgBox(Msg, Style, Title, Help, Ctxt)

    If Response = vbNo Then
        Cancel = True
    Else
        Call ManualUpdate
    End If
End Sub

And the ManualUpdate sub:

Private Sub ManualUpdate()
    Dim ccnDb As ADODB.Connection
    Set ccnDb = CurrentProject.Connection
    Dim cmd As ADODB.Command
    Set cmd = New ADODB.Command

    Dim strSQL As String
    strSQL = "UPDATE personal SET personaApPaterno = " & "'" & Trim(Me.personaApPaterno.Value) & "'"
    strSQL = strSQL & ", personaApMaterno = " & "'" & Trim(Me.personaApMaterno.Value) & "'"
    strSQL = strSQL & ", personaNombre = " & "'" & Trim(Me.personaNombre.Value) & "'"
    strSQL = strSQL & ", personaCargo = " & "'" & Trim(Me.personaCargo.Value) & "'"
    strSQL = strSQL & ", departamentoId = " & Me.cmbDepto.Value
    strSQL = strSQL & ", ciudadId = " & Me.cmbCiudad.Value
    strSQL = strSQL & ", personaProfesion = " & "'" & Trim(Me.personaProfesion.Value) & "'"
    strSQL = strSQL & ", personaGerente = " & Me.personaGerente.Value
    strSQL = strSQL & ", personaExterno = " & Me.personaExterno.Value
    strSQL = strSQL & ", personaSexo = " & Me.ogSexo.Value
    strSQL = strSQL & " WHERE personaRUT = " & Me.personaRUT.Value

    If Me.Dirty Then
        Me.Dirty = False
    End If

    With cmd
        .CommandText = strSQL
        .ActiveConnection = ccnDb
        .Execute
    End With

    Set cmd = Nothing
    DoCmd.Close
End Sub

So far... so good...

As you can see, the SQL does a direct update to the table called "personal", and it works fine. The only thing driving me crazy is that, for some odd reason, Access 2007 is also updating the auxiliary tables, replacing the text in the "ciudadName" and "departamentoName" fields with their own respective IDs. For some extra information: I was using DAO before ADO.

Here is the DAO code also...

Private Sub ManualUpdate()
    Dim dbDao As DAO.Database
    Dim rsDao As DAO.Recordset

    If Me.Dirty Then
        Me.Dirty = False
    End If

    Set dbDao = CurrentDb
    Set rsDao = dbDao.OpenRecordset("personal", dbOpenTable)

    rsDao.FindFirst ("personaRUT = " & Me.personaRUT.Value)
    rsDao.Edit

    rsDao![personaApPaterno] = Trim(Me.personaApPaterno.Value)
    rsDao![personaApMaterno] = Trim(Me.personaApMaterno.Value)
    rsDao![personaNombre] = Trim(Me.personaNombre.Value)
    rsDao![personaCargo] = Me.personaCargo.Value
    rsDao![departamentoId] = Me.cmbDepto.Value
    rsDao![comunaId] = Me.cmbComuna.Value
    rsDao![personaProfesion] = Me.personaProfesion.Value
    rsDao![personaGerente] = Me.personaGerente.Value
    rsDao![personaExterno] = Me.personaExterno.Value
    rsDao![personaSexo] = Me.ogSexo.Value

    rsDao.Update

    rsDao.Close
    dbDao.Close

    Set dbDao = Nothing
    Set rsDao = Nothing
    DoCmd.Close
End Sub

By the way, DAO was doing the same unwanted "update" where it was not supposed to happen; that's why I went the ADO way, but with no luck at all.
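One side note on the concatenated UPDATE above: building SQL from strings also breaks as soon as a value contains a quote. Parameterized commands avoid that; in ADO the analogue is creating parameters on the Command object (CreateParameter / Parameters.Append) and using ? placeholders. A sketch of the idea using Python's sqlite3 with hypothetical data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE personal (personaRUT INT, personaNombre TEXT)")
conn.execute("INSERT INTO personal VALUES (11111111, 'old')")

# Placeholders let the driver handle quoting, so a value such as
# "O'Higgins" cannot break the statement the way concatenation does.
conn.execute("UPDATE personal SET personaNombre = ? WHERE personaRUT = ?",
             ("O'Higgins", 11111111))
print(conn.execute("SELECT personaNombre FROM personal").fetchone())
```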

Information about Disk Storage MySQL

Posted: 03 Jul 2013 02:31 PM PDT

I am trying to create an application for monitoring MySQL. Part of the application monitors disk storage, but after much research I have not found how to calculate:

  • used and free size of tablespace,
  • used and free of data files,
  • used and free of log files

Does anyone know how to calculate these sizes, or can someone point me to helpful articles or books? Thanks.

How can I better optimize my tables and a LEFT JOIN query to find items that don't yet exist in the right table?

Posted: 03 Jul 2013 02:15 PM PDT

I have two tables that represent a list of urls and their related word indexes. Here are the table definitions for reference.

desc urllist;
+-------+---------------------+------+-----+---------+----------------+
| Field | Type                | Null | Key | Default | Extra          |
+-------+---------------------+------+-----+---------+----------------+
| id    | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |
| url   | text                | NO   |     | NULL    |                |
+-------+---------------------+------+-----+---------+----------------+

and

desc wordlocation;
+----------+---------------------+------+-----+---------+-------+
| Field    | Type                | Null | Key | Default | Extra |
+----------+---------------------+------+-----+---------+-------+
| urlid    | bigint(20) unsigned | NO   |     | NULL    |       |
| wordid   | bigint(20) unsigned | NO   |     | NULL    |       |
| location | int(10) unsigned    | NO   |     | NULL    |       |
+----------+---------------------+------+-----+---------+-------+

The software app is a web spider. It crawls a list of urls, extracts those urls, and inserts these into the urllist table. Then, an indexer checks to see what urls have not yet been indexed, and then proceeds to index said urls.

Here is the query I am using to find items in the left table (urllist) that have not yet been indexed in the right table (wordlocation). This query is as suggested on the mysql.com website:

select * from urllist ul
left join wordlocation wl on ul.id = wl.urlid
where wl.urlid IS NULL;

As of this writing my test database has only 600 indexed urls, and the wordlocation table has 1.3 million rows. However, my CPU is at 100%, and the longest I've waited to see whether the query would complete is half an hour (it never did, by the way).

To be thorough, here is the explanation of the query:

explain select * from urllist ul left join wordlocation wl on ul.id = wl.urlid where wl.urlid IS NULL;
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows    | Extra                   |
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------------------+
|  1 | SIMPLE      | ul    | ALL  | NULL          | NULL | NULL    | NULL |   50364 |                         |
|  1 | SIMPLE      | wl    | ALL  | NULL          | NULL | NULL    | NULL | 1351371 | Using where; Not exists |
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------------------+

I need this query to finish in seconds, not minutes. Also, I'm worried about scalability. I have 40,000 unique urls that are waiting to be added to the index, so how do I take that into consideration with my table and query design? 400,000 urls?

Just a couple notes about my decisions on the current table structure.

I have no intention of stopping at 400,000 urls, but perhaps bigint(20) is a bit overzealous?

Url as text is for more practical reasons. I index a lot of Asian and other foreign language domains which do not appear as their equivalent Kanji or other characters in the database and frequently take more than 255 characters.

I'm using MySQL. I'm definitely open to suggestions as to better table and query design. Please let me know if I can provide more information.
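One likely cause of the slowness: the EXPLAIN above shows no usable keys, so every probe of wordlocation scans 1.3 million rows. An index on wordlocation(urlid) should let the anti-join do index lookups instead. A small sketch (SQLite via Python for portability, but the same index helps MySQL's left-join / IS NULL plan):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE urllist (id INTEGER PRIMARY KEY, url TEXT NOT NULL);
CREATE TABLE wordlocation (urlid INTEGER NOT NULL,
                           wordid INTEGER NOT NULL,
                           location INTEGER NOT NULL);
-- The crucial part: let the anti-join probe an index instead of scanning
CREATE INDEX ix_wordlocation_urlid ON wordlocation (urlid);

INSERT INTO urllist VALUES (1,'a.com'),(2,'b.com'),(3,'c.com');
INSERT INTO wordlocation VALUES (1, 10, 0), (1, 11, 1), (3, 10, 0);
""")

# urls not yet indexed: NOT EXISTS is an equivalent anti-join form
unindexed = conn.execute("""
  SELECT ul.id, ul.url FROM urllist ul
  WHERE NOT EXISTS (SELECT 1 FROM wordlocation wl WHERE wl.urlid = ul.id)
""").fetchall()
print(unindexed)  # [(2, 'b.com')]
```

With that index in place the work per urllist row is one index seek, so the query should scale with the number of urls rather than urls × wordlocation rows.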

MySQL on Unix, should administrator accounts join the operating system "mysql" group?

Posted: 03 Jul 2013 01:22 PM PDT

When I install MySQL via Yum, it automatically creates the mysql operating system user and group. Then mysqld executes as mysql (instead of as root), and all the files in the data directory are owned by mysql:mysql.

This is obviously correct security practice, I understand why this happens and don't want to subvert it.

When I'm administering a MySQL system, I've just gotten in the habit of using sudo when I need to interact with a protected file (e.g., reading a binary log with mysqlbinlog, creating a backup tarball, etc).

Would it also work to add my OS user to the mysql group? Are there major downsides of this I'm not considering?

Transferring Data from OLTP to OLAP databases

Posted: 03 Jul 2013 11:25 AM PDT

Systems that have both OLTP and OLAP databases periodically move data from the OLTP database to the OLAP database, so that reports can be generated based on the OLAP database.

What's the official term for this process and is there an overview of this process online?

TOAST Table Growth Out of Control - FULLVAC Does Nothing

Posted: 03 Jul 2013 08:49 PM PDT

Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order to take advantage of autovacuum features. This server exists in a very closed environment (isolated network, limited root privileges; this explains the older software in use) and runs on RHEL5.5 (i686). After the upgrade, the database has constantly been growing to the tune of 5-6 GB a day. Normally, the database, as a whole, is ~20GB; currently, it is ~89GB. We have a couple other servers which run equivalent databases and actually synchronize the records to each other via a 3rd party application (one I do not have access to the inner workings). The other databases are ~20GB as they should be.

Running the following SQL, it's fairly obvious there's an issue with a particular table, and, more specifically, its TOAST table.

SELECT nspname || '.' || relname AS "relation",
       pg_size_pretty(pg_relation_size(C.oid)) AS "size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(C.oid) DESC
LIMIT 20;

Which produces:

          relation          |  size
----------------------------+---------
 pg_toast.pg_toast_16874    | 89 GB
 fews00.warmstates          | 1095 MB
 ...
(20 rows)

This TOAST table is for a table called "timeseries" which saves large records of blobbed data. A SUM(LENGTH(blob)/1024./1024.) of all the records in timeseries yields ~16GB for that column. There should be no reason this table's TOAST table should be as large as it is.

I've performed a VACUUM FULL VERBOSE ANALYZE timeseries, and the vacuum runs to completion with no errors.

INFO: vacuuming "pg_toast.pg_toast_16874"
INFO: "pg_toast_16874": found 22483 removable, 10475318 nonremovable row versions in 10448587 pages
DETAIL: 0 dead row versions cannot be removed yet.
Nonremovable row versions range from 37 to 2036 bytes long.
There were 20121422 unused item pointers.
Total free space (including removable row versions) is 0 bytes.
4944885 pages are or will become empty, including 0 at the end of the table.
4944885 pages containing 0 free bytes are potential move destinations.
CPU 75.31s/29.59u sec elapsed 877.79 sec.
INFO: index "pg_toast_16874_index" now contains 10475318 row versions in 179931 pages
DETAIL: 23884 index row versions were removed.
101623 index pages have been deleted, 101623 are currently reusable.
CPU 1.35s/2.46u sec elapsed 21.07 sec.

I REINDEXed the table, which freed some space (~1GB). I can't CLUSTER the table, as there isn't enough space on disk for the process, and I'm holding off on rebuilding the table entirely because I'd like to find out why it is so much bigger than the equivalent databases we have.

Ran a query from the PostgreSQL wiki here - "Show Database Bloat", and this is what I get:

current_database | schemaname | tablename                  | tbloat | wastedbytes | iname                      | ibloat | wastedibytes
-----------------+------------+----------------------------+--------+-------------+----------------------------+--------+-------------
ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_synchlevel  |    0.0 |            0
ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_localavail  |    0.0 |            0
ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_expirytime  |    0.0 |            0
ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_expiry_null |    0.0 |            0
ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | uniq_localintid            |    0.0 |            0
ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | pk_timeseries              |    0.1 |            0
ptrdb04          | fews00     | idx_timeseries_expiry_null |    0.6 |           0 | ?                          |    0.0 |            0

It looks like the database doesn't consider this space as "empty," at all, but I just don't see where all the disk space is coming from!

I suspect that this database server is deciding to use 4-5x as much disk space to save the same records pulled from the other data servers. My question is this: Is there a way I can verify the physical disk size of a row? I'd like to compare the size of one row on this database to another "healthy" database.

Thanks for any help you can provide!

EDIT

I ended up rebuilding the table from a dumped schema due to its size (couldn't leave it alone for another day). After synchronizing the data, via the software synch process, the TOAST table was ~35GB; however, I could only account for ~9GB of it from that blob column which should be the longest in terms of values. Not sure where the other 26GB is coming from. CLUSTERed, VACUUM FULLed, and REINDEXed to no avail. The postgresql.conf files between the local and remote data servers are exactly the same. Is there any reason this database might be trying to store each record with a larger space on disk?

What are Measures and Dimensions in Cubes

Posted: 03 Jul 2013 07:30 PM PDT

I'm very new to Microsoft SQL Server Business Intelligence and Analysis Services (but I have been programming with SQL Server for years). Can anyone describe measures and dimensions in cubes in simple words (if possible, with images)?

Thanks.

How does one select the histogram / scatterplot / etc. Data Viewers in SSIS 2012

Posted: 03 Jul 2013 11:27 AM PDT

I have a data flow task in SSIS 2012 and have created a flat file source and an OLE DB connection destination. I can select the data flow path between them as in earlier versions of SSIS and I can right click and select the Data Flow Path Editor.

The Data Viewer option is available as in past versions, but there is no Add button, and I cannot determine how to add a histogram, scatterplot, or other data viewer.

All I can seem to configure is a sample of the data that actually moved across.

On which machine should TCP Chimney Offload be disabled?

Posted: 03 Jul 2013 03:07 PM PDT

I am advising someone on how to disable TCP Chimney Offload, which is enabled by default by Windows Server 2003 SP2. [1] We are working in an environment with multiple server machines, namely the application server machine which communicates with the database server machine.

On which machine should TCP Chimney Offload be disabled? Application, database or both?

[1] This can cause communication problems between application and database. See http://support.microsoft.com/kb/942861 and http://blogs.msdn.com/b/psssql/archive/2008/10/01/windows-scalable-networking-pack-possible-performance-and-concurrency-impacts-to-sql-server-workloads.aspx

MySQL replication: 'Duplicated entry for PRIMARY key'

Posted: 03 Jul 2013 12:30 PM PDT

Could you please help me understand why I'm receiving 'Duplicated entry for PRIMARY key' errors on a slave server after a full re-sync.

Basically, 'mysqldump' was running almost the whole night, and the restore process then took a couple of hours, so when I started the slave it was ~63874 seconds behind the master.

The slave server is read-only (read_only), and there were no writes during the re-sync process, so I don't understand why there are duplicate keys.

Binary log format is set to MIXED on master.

Command used to backup DB:

mysqldump --opt --single-transaction -Q --master-data=2 db | bzip2 -cz > db.sql.bz2  

The slave is replicating only one database from master (db -> db_backup) with the following options:

replicate-wild-do-table = db_backup.%  replicate-rewrite-db = db->db_backup  

"SQL1042C An unexpected system error" when creating a database

Posted: 03 Jul 2013 01:22 PM PDT

I installed DB2 Express-C 10.1 on OS X 10.8.4. I installed it in user mode, and it seems to have created an instance with my username (avernet). I checked the instance is indeed there by running db2ilist. The database manager is running, as when I run db2start, I get the message SQL1026N The database manager is already active.

Now, I am trying to create a database by running db2 from the command line, and typing the command create database orbeon, but it returns:

SQL1042C  An unexpected system error occurred.  SQLSTATE=58004  

What am I missing here? How can I create a simple database from the command line (I prefer not to bother with Data Studio as it isn't available for OS X)?

In db2dump/db2diag.log, I see the following error:

2013-06-29-15.31.38.948217-420 E159047E344          LEVEL: Error (OS)
PID     : 1766                 TID : 140735175762304 PROC : db2star2
INSTANCE: avernet              NODE : 000
HOSTNAME: huashan.local
FUNCTION: DB2 UDB, SQO Memory Management, sqloMemCreateSingleSegment, probe:100
CALLED  : OS, -, shmget                           OSERR: 17

Note that for the install to go through, I increased the OS X shared memory, per this recommendation.

Connection to local SQL Server 2012 can be established from SSMS 2008 but not from SSMS 2012

Posted: 03 Jul 2013 03:47 PM PDT

I have two local SQL Server instances running on my local machine. The first is SQL Server 2008 R2 Enterprise Edition (named MSSQLSERVER) and the 2nd is SQL Server 2012 Business Intelligence Edition.

My problem is with SSMS 2012 which can connect to distant servers but not the local 2012 instance; I can however connect to this instance from SSMS 2008.

The error message I get when trying to login is

Login Failed. The login is from an untrusted domain and cannot be used with Windows Authentication. (Microsoft SQL Server, Error: 18452)

I must point out that I don't have the necessary privileges to access SQL Server Configuration Manager (blocked by group policy).

Any help would be appreciated.

Is there a way to find the least recently used tables in a schema?

Posted: 03 Jul 2013 05:47 PM PDT

Is there a way to find the least recently used tables in a MySQL schema, short of going into the data directories? I was hoping there was a metadata or status trick, but Update_Time in SHOW TABLE STATUS and INFORMATION_SCHEMA is always NULL.
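For reference, this is the kind of metadata query being described; it works where UPDATE_TIME is populated (MyISAM), but comes back NULL for InnoDB tables on MySQL of this vintage, which is exactly the problem the poster describes. The schema name is a placeholder:

[code="sql"]-- Order tables by last update time. UPDATE_TIME is NULL for InnoDB
-- tables here, so this only helps for engines that maintain it.
SELECT TABLE_NAME, UPDATE_TIME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'my_schema'   -- hypothetical schema name
ORDER BY UPDATE_TIME IS NULL, UPDATE_TIME;[/code]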

Optimize command issuing "'View is not base table" and "Corrupt" error messages

Posted: 03 Jul 2013 01:46 PM PDT

I have no experience with MySQL. My boss ran OPTIMIZE against a MySQL database. Against a bunch of views we got the message

"tablexyz" is not BASE TABLE

and the next message is

Corrupt

It seems to happen only against views; we didn't get any of these error messages against the base tables.

Does this look like an incorrect error message, or do we have issues with our tables?

How to find Oracle home information on Unix?

Posted: 03 Jul 2013 08:47 PM PDT

I need help finding the Oracle home path corresponding to a database instance in a RAC environment. I am aware of a few ways to achieve this; I list them below to avoid getting the same answers.

  1. /etc/oratab: this file is not mandatory and hence may not list every instance.

  2. Parsing the contents of listener.ora: in a RAC environment, listener.ora can be located at a non-default location.

  3. Use TNS_ADMIN to find the listener.ora location and parse the file.

  4. The ORACLE_HOME environment variable: may not always be set.

  5. ps -ef | grep tns to get the home path from the service name: gives the path only for the currently running listener.

  6. select "SYSMAN"."MGMT$TARGET_COMPONENTS"."HOME_LOCATION"
    from "SYSMAN"."MGMT$TARGET_COMPONENTS"
    where "SYSMAN"."MGMT$TARGET_COMPONENTS"."TARGET_NAME" = <Database SID>

    The schema sysman can be dropped after first time login to oracle.

  7. SELECT NVL(SUBSTR(FILE_SPEC, 1, INSTR(FILE_SPEC, '\', -1, 2) -1) , SUBSTR(FILE_SPEC, 1, INSTR(FILE_SPEC, '/', -1, 2) -1)) FOLDER
    FROM DBA_LIBRARIES
    WHERE LIBRARY_NAME = 'DBMS_SUMADV_LIB';

    So if a DBA changes Oracle Home (and hence the location of libqsmashr.so) after installation of Oracle, the path retrieved from above query would be invalid.

  8. . oraenv: works only for 11g.

I am trying to find a generic way that works for all Oracle versions and does not depend on anything a DBA cannot rely on.

Do you have any way other than listed above to do the same?

Many Thanks in advance.

How to handle "many columns" in OLAP RDBMS

Posted: 03 Jul 2013 12:31 PM PDT

I have a fact table that has around 1K different numerical attributes (i.e. columns). I would like to store this in a column-oriented DB and perform cube analysis on it.

I tried to design a star schema, but I'm not sure how to handle this many columns. Normalising it sounds wrong, but I can't just have flat columns either. The combination of attributes are also too diverse to have a simple dimension table for this, even if I'd reduce the numerical values into categories (ranges), which is an option. I thought about storing them as XML or JSON for each row, but that doesn't sound great either.

If it helps, I'm planning to use Amazon's redshift for the DB.

Note: We have strong preference for RedShift as it fits perfectly for at least other few operations we do on this data. Hence I want to avoid other technologies like HBase if possible.

Mysql DB server hits 400% CPU

Posted: 03 Jul 2013 11:46 AM PDT

I have been facing problems with my database server for about a month. Below are the observations I see when it hits the top.

 - load average: 40 to 50
 - CPU %: 400%
 - idle %: 45%
 - wait %: 11%
 - vmstat procs: r -> 14 and b -> 5

It then drains down within 5 minutes. When I check SHOW PROCESSLIST I see DML and SQL queries halted for some minutes, and processing is very slow, even though every query is indexed appropriately and there is normally no delay; most of the time any query executed to serve the application returns in less than 1 second.

  • Mysql Version : 5.0.77
  • OS : CentOS 5.4
  • Mem: 16GB RAM (80% allocated to INNODB_BUFFER_POOL_SIZE)
  • Database Size: 450 GB
  • 16 processors with 4 cores
  • Not using innodb_file_per_table.
  • TPS ranges from 50 to 200.
  • Master to a slave of the same configuration; Seconds_Behind_Master is 0.

The URL below shows SHOW INNODB STATUS \G and SHOW OPEN TABLES at the time of the spike, which subsided within 5 minutes. In rare scenarios, maybe once in two months, I see the processes take 5 to 8 hours to drain back to normal. Throughout, I watch the load and processor utilization, how it gradually splits its tasks, and keep monitoring the process list, InnoDB status and IO status; I need not do anything to bring it down. It serves the applications promptly and after some time drains down to normal. Can you find anything suspicious in the URL, any locks or OS waits, any suggestion to triage with initially, or what could have caused such spikes?

http://tinyurl.com/bm5v4pl -> "show innodb status \G and show open tables at DB spikes."

Also there are some concerns that I would like to share with you.

  1. Recently I have seen a table that gets only about 60 inserts per second. It predominantly locks for a while waiting for the auto-inc lock to be released, and so subsequent inserts stay in the processlist tray. After a while the table shows IN_USE of about 30 threads, and later something (I don't know what) frees them and clears the tray. (During this period the load goes above 15 for 5 minutes.)

  2. Suppose application functionality should be shaped to best suit how the DB server reacts. There are 3 to 5 functionalities, each an independent entity schema-wise. Whenever I see the locks, all the other schemas get affected too.

  3. The last one puzzles me most. The slave keeps in sync with the master with a delay of 0 seconds at all times, even though the slave applies everything through a single SQL thread, fed FIFO by the relay IO from the master's binary logs. If this single-threaded slave can keep its load low and stay up to date, do the hits for these functionalities really need to be concurrent on the master, which I assume is causing the IO locks at OS level? Could this be organized in the application itself, to keep the concurrency density thinner?

Error 1044 Access denied to user

Posted: 03 Jul 2013 04:47 PM PDT

This is driving me crazy.

When I try to create a few tables from my Workbench model I get this error 1044.

I've been trying to find a solution but nothing works for me.

Curiously when I run SELECT USER(),CURRENT_USER(); I get two versions of the same user. One is techboy@(an ip address) and the other is techboy@%.

Access is denied to both at one point or another.

The MySQL server is a remotely hosted server with the user permissions correctly set.

Multiple database servers for performance vs failover

Posted: 03 Jul 2013 06:47 PM PDT

If I have two database servers, and I am looking for maximum performance vs high-availability, what configuration would be best?

Assuming the architecture is two load-balanced web/app servers in front of two db servers, will I be able to have both db servers active with synced data, with web1 to db1, web2 to db2 setup? Is this active/active?

I'm also aware that the two db servers can have their own schema to manually 'split' the db needs of the app. In this case daily backups would be fine. We don't have 'mission critical data.'

If it matters, we have traffic around 3,000-7,000 simultaneous users.

Advices for modeling facts, time hierarchies based on two conditions

Posted: 03 Jul 2013 03:08 PM PDT

Sorry if my question does not belong to that SE but I tried to find the best place and it ended up here.

My problem is this :

I've got a table modeled as this in our datawarehouse env. :

MYTABLE (service_id, date_input, date_when_active, date_when_inactive, bunch_of_values...)  

Each night this table is updated with new records from our prod environment. When a record changes in prod and already exists in the DW, a new record is created in the DW with the same service_id, and the old record gets updated with sysdate in its when_inactive field. The newly created record has its when_inactive field set to null.

We use a query to extract data from this table using some conditions to get exactly the value of all services in a precise period. The query goes like this :

select *
from mytable
where service_date_input between :p_date_start and :p_date_end
  and :p_date_end between date_when_active and nvl(date_when_inactive, :p_date_end)

What I would like to do is to model a time hierarchy in Cognos Framework Manager so I can drill through fiscal years, periods, weeks, months, etc.

What I can't figure out is if I need to remodel the table to include calculated fiscal_year_id, period_id, week_id, month_id, etc. directly in the DB or is it possible to do it directly in Cognos FWM?

One of the requirements is not to force the user to supply two date sets (filter by input date AND fiscal year). The input date should be filtered transparently: when you use a fiscal year parameter to filter or display the data, the input-date rule should apply accordingly.

Slow backup and extremely slow restores

Posted: 03 Jul 2013 12:46 PM PDT

I don't normally work with MySQL but with MS SQL, and am having issues restoring a dump backup of a 9 GB database. I converted it to MS SQL and it takes a grand total of 4 minutes to restore, but the MySQL DB takes over an hour on the same server. The MySQL database is using InnoDB; is there an alternative for speeding up the restores? Both databases are on the same machine, Windows 2008 R2 in a VM with a dynamic SAN.

Correction: it takes MS SQL 1 minute to restore; restoring the same database in MySQL takes 1 hour.

EDIT: mysql.ini (with commented lines removed):

[client]
no-beep
port=3306

[mysql]
default-character-set=utf8

[mysqld]
port=3306
basedir="C:\Program Files\MySQL\MySQL Server 5.5\"
datadir="C:\ProgramData\MySQL\MySQL Server 5.5\data\"
character-set-server=utf8
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
log-output=NONE
general-log=0
general_log_file="CM999-SV510.log"
slow-query-log=0
slow_query_log_file="CM999-SV510-slow.log"
long_query_time=10
log-error="CM999-SV510.err"
max_connections=100
query_cache_size=0
table_cache=256
tmp_table_size=22M
thread_cache_size=8
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=43M
key_buffer_size=8M
read_buffer_size=64K
read_rnd_buffer_size=256K
sort_buffer_size=256K
innodb_additional_mem_pool_size=4M
innodb_flush_log_at_trx_commit=1
innodb_log_buffer_size=2M
innodb_buffer_pool_size=124M
innodb_log_file_size=63M
innodb_thread_concurrency=9
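Not from the thread, but a common sketch for this situation: a large InnoDB import is often dominated by per-row constraint checks and log flushing, so wrapping the load as below (and raising innodb_buffer_pool_size well above the 124M shown, if RAM allows) typically helps. The dump path is a placeholder:

[code="sql"]-- Session-level settings commonly wrapped around a large dump import.
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;

SOURCE C:/backups/db.sql;   -- hypothetical dump path

SET foreign_key_checks = 1;
SET unique_checks = 1;
COMMIT;[/code]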

TempDB data files don't shrink well on SQL 2008

Posted: 03 Jul 2013 07:47 PM PDT

So ideally you want to pre-size your TempDB data and log files appropriately so that this isn't a concern, but sometimes a rogue developer runs a crazy huge query on a production server during work hours, causing the TempDB data files to blow up huge.

If TempDB was literally the only thing on the drive, then I could probably just leave it like that, but on some servers I have several SQL instances that all share the same TempDB drive.

So how can I shrink these TempDB data files without restarting the instance?

I normally try:

DBCC SHRINKFILE (name = 'tempdev', size = 5000)  

This worked fairly consistently in SQL 2000 and 2005 (assuming the actual tempdb activity had tapered off), but seems to not work very often in 2008 (on my current server, it only worked on 1 of 4 data files, the others continue to remain 2-3x larger).
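A commonly suggested workaround (my addition, not something from this post): on 2008, cached internal objects can pin pages in tempdb and make SHRINKFILE a no-op, so clearing the caches first sometimes lets the shrink proceed. Both commands flush server-wide caches and will cause a temporary performance dip, so this is a sketch for a maintenance window, not a routine fix:

[code="sql"]-- Flush cached objects that may be holding tempdb pages, then retry the shrink.
DBCC FREEPROCCACHE;      -- clears the plan cache (server-wide side effects)
DBCC DROPCLEANBUFFERS;   -- clears clean pages from the buffer pool
DBCC SHRINKFILE (name = 'tempdev', size = 5000);[/code]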

[Articles] Who's Got Your Data?



Who's Got Your Data?

Posted: 02 Jul 2013 11:00 PM PDT

The state of data security might get worse in the future as more companies gather and share more data.

[MS SQL Server] Procedure Cache Size: Monitoring and Adjusting



Procedure Cache Size: Monitoring and Adjusting

Posted: 03 Jul 2013 04:45 AM PDT

So, I'm looking into ways to increase procedure cache size, and I was wondering if someone could help me figure out how to monitor it, and how to change it. Can I get some help to start on this issue?

Overhead on a NC index if the key fields are not updated during production.

Posted: 02 Jul 2013 09:32 AM PDT

My next question while I am in the INDEX zone. As an example: I have a large table (4-5 million rows) with, say, five NC indexes on the fields used in WHERE and JOIN clauses of the incoming queries.

If there are never (or only minimal) UPDATEs on any of the fields in the NC indexes (INCLUDE fields as well), is it fair to say that there would be only minimal operational overhead related to updating the NC indexes? I.e., UPDATEs will happen very frequently to the data in the table, but only on non-NC-indexed fields, so only the clustered index would need to be updated?

I guess there will be other overhead of keeping these indexes: INSERTs, query plan compilation, memory used, etc. Obviously maintenance will need to be performed, but I am talking about production-hours activity.

Thanks

Right click on a Database, select properties Error, cannot show requested dialog.

Posted: 02 Jul 2013 11:54 PM PDT

I right-click on a database and select Properties, and get an error: cannot show requested dialog. The error reads:

cannot show requested dialog (SqlMgmt) Property Size is not available for Database '[DataWarehouse]'. This property may not exist for this object, or may not be retrievable due to insufficient access rights. (Microsoft.SqlServer.Smo)

I'm running Database Consistency Checker. Has anyone experienced this problem?

Restore - Exclusive access could not be obtained because database is in use.

Posted: 02 Jul 2013 11:11 PM PDT

Even with setting the database to SINGLE_USER WITH ROLLBACK IMMEDIATE (on the line immediately prior to the restore command), I am occasionally getting these errors on the nightly restore:

Exclusive access could not be obtained because the database is in use. [SQLSTATE 42000] (Error 3101) RESTORE DATABASE is terminating abnormally. [SQLSTATE 42000] (Error 3013)

The exact code is below. I really can't wrap my head around this: if it got to the RESTORE DATABASE command, then the single-user command must have been successful, right? Could something else steal the single connection away from me while this is running?

[code="sql"]ALTER DATABASE DbName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE [DbName] FROM DISK = N'Z:\DbName.bak'
WITH FILE = 1,
    MOVE N'DbName' TO N'D:\SqlData\DbName.mdf',
    MOVE N'DbName_log' TO N'D:\SqlData\DbName.ldf',
    NOUNLOAD, REPLACE, STATS = 10
GO
ALTER DATABASE DbName SET RECOVERY SIMPLE;
ALTER DATABASE DbName SET MULTI_USER;
GO[/code]

need to delete the datbase and then restore of the different backup on daily bases

Posted: 02 Jul 2013 11:40 PM PDT

Good morning. I would appreciate it if I could get a syntax to run. This is what I need to accomplish: delete the database (it gives me an error about open connections), and once it is deleted, restore it from a .bak file. Help is much appreciated. Regards, Bubby
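A minimal sketch of the kind of script being asked for, using hypothetical database and file names; SINGLE_USER WITH ROLLBACK IMMEDIATE kicks out the open connections that otherwise block the drop:

[code="sql"]-- Kick out open connections, drop, then restore from the nightly .bak
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE MyDb;
RESTORE DATABASE MyDb
    FROM DISK = N'D:\Backups\MyDb.bak'   -- hypothetical path
    WITH REPLACE, STATS = 10;[/code]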

Finding unused fields

Posted: 02 Jul 2013 07:00 AM PDT

We have a db with 388 tables/5,950 fields. I was just asked if I could find any fields (in the entire freaking db) that have never had any data populated in them. YIKES! Is there a quick and simple way to do this?
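There is nothing built-in that answers this in one step; a hedged sketch of the usual approach is to generate one probe per column from the catalog views and run the generated statements (it will scan every table, so run it off-hours). COUNT(col) counts only non-NULL values, so a result of 0 means the column was never populated:

[code="sql"]-- Generate "is this column ever populated?" probes for every nullable column.
-- Run the generated statements separately; columns returning 0 have no data.
SELECT 'SELECT ''' + s.name + '.' + t.name + '.' + c.name + ''' AS col, '
     + 'COUNT(' + QUOTENAME(c.name) + ') AS populated_rows '
     + 'FROM ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ';'
FROM sys.columns c
JOIN sys.tables  t ON t.object_id = c.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE c.is_nullable = 1;[/code]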

Creating a DBA Administration Database

Posted: 02 Jul 2013 08:47 PM PDT

Good morning / afternoon / evening (depending on where you are).

I'm going to be setting up a DBA admin database to replace an old outdated one that contained very minimal logging, and I would like the shiny new one to collect all the useful information available to us (at the moment on 2005, but soon 2012) around database performance and statistics. I've been searching quite a bit for advice and best practice, but either my Google-fu is failing or there is not much info out there on this.

In terms of what I would like help with:

- What is best practice
- What tables it should contain
- What scripts to run
- What jobs to set up
- Anything not contained in the above that I have missed and would be helpful to know :hehe:

Any scripts and examples of databases etc. would be welcomed :-)

Thanks all
Andy

Urgent Issue Reg Checkdb issue please respond

Posted: 02 Jul 2013 11:21 AM PDT

Hi guys, I am trying to run CHECKDB in my prod environment. It has been failing for the last 2 months because of a tempdb space issue, but on another server with the same database size and tempdb size it executes successfully. My question: the database where it fails is not properly organized (no proper indexes), while the other one has all the relational properties. Does CHECKDB operate differently depending on how the database is structured (indexes and type of data)? Will it operate differently on heap and indexed tables?

New NC Index or new INCLUDE on Existing Index

Posted: 02 Jul 2013 09:16 AM PDT

I have a query that does not run very frequently, but when it does it table-scans a large table, which causes issues. This query has only one value in the WHERE clause and returns 21 fields. The value in the WHERE clause is not in any NC indexes.

Initial execution (clustered index table scan): 1878874 reads, 181902 ms.

I tested two scenarios.

1. Add a new NC index for the WHERE clause value. There are too many fields to cover, so a bookmark lookup is inevitable. The query uses the new index and performs a SEEK: 7 reads, 4 ms total time, 151 MB index size.

2. Add an INCLUDE column containing the WHERE clause value to an existing NC index (removing the new NC index first). The query now performs a SCAN, but on the newly created INCLUDE data: 29647 reads, 3277 ms total time, 101 MB additional size on the NC index.

So it looks like adding a new NC index is the better option. As a rough guess, would you say the small index-size increase and the overhead of maintaining all index levels (rather than just the leaf for an INCLUDE field?) mean adding a new NC index is the better of the two options? I could live with either option, but if there is little difference then it would make sense to add the new NC index.

As a side note: I ran the offending query through the DTA as a workload file and, as expected, it came back with a new index as the recommendation, but it also included all 20 fields in the INCLUDE list, which I thought was a little pointless in this case.

[SQL 2012] A question about replacing a character in a nvarchar



A question about replacing a character in a nvarchar

Posted: 03 Jul 2013 01:04 AM PDT

Hello. First of all, I apologize if this was asked before; I did search before asking, so please don't get mad :-)

I have a string (nvarchar). Using REPLACE, I can replace a character in this string with another one. But what if I wanted to do something like this:

[code="sql"]select replace(replace('abc', 'a', 'b'), 'b', 'z')[/code]

Not only that, but what if I want to replace 10 characters? The code will look monstrous:

[code="sql"]SELECT REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE('abcdefghijklmn', 'a', 'b'), 'b', 'c'), 'c', 'd'), 'd', 'e'), 'e', 'f'), 'f', 'g'), 'g', 'h'), 'h', 'j'), 'j', 'k'), 'k', 'i')[/code]

Is there a better way that will not sacrifice performance? I tried using a recursive CTE, but the performance is really, really bad.

Thank you for your time :)

Cheers,
Radu

Restoring a backup of an existing DB to a new name on the same server and getting exclusive lock error

Posted: 03 Jul 2013 02:43 AM PDT

I am attempting to restore a version of my current database ("JamestonDB") to a new name ("JamestonDB0628"). I have been successful restoring other databases, but this one is giving me a "cannot obtain exclusive lock" error. This database is more heavily used, but since I am creating a "new" database I do not understand the error.

2012 and Always on - different security access?

Posted: 03 Jul 2013 01:16 AM PDT

Here is what I would like to do, and I wanted to know if it is possible. With AlwaysOn: have one active node for production applications, one node for reporting with different logins permitted, and one node for backups. Is it possible to have a different set of logins per node?

I am trying to sell 2012 where I work, and one thing we are looking at is using transactional replication on 2008 R2 versus going to AlwaysOn on 2012. It just depends on the security options.

Thank you.
Kameron

Stop users from expanding the Tables nodes in Object Explorer

Posted: 03 Jul 2013 12:49 AM PDT

I would like to know if there is a way to stop users from being able to see the tables of my database, by preventing them from expanding the Tables node in Object Explorer.

SQL Developer Edition Question

Posted: 02 Jul 2013 03:23 AM PDT

Is it possible to restrict SQL Server Developer Edition to only allow features available in Standard Edition? Our production environment is Standard Edition, and we are considering using Developer Edition in our dev environment due to the licensing, but we want to make sure that whatever would break in prod due to edition restrictions also breaks in dev.

Log Shipping Backup Very Slow (sqllogship.exe)

Posted: 02 Jul 2013 06:14 AM PDT

Hello. We have a SQL Server instance with ~200 DBs on it. Each one is log shipped to a remote site. To avoid all jobs going off at once, I've scripted sqllogship.exe to run sequentially in one job. Anyway, I am finding that each DB is taking 30 seconds to complete. I have log shipping on servers with ~100 similarly sized DBs with similar data that works fine, taking seconds per DB. I have invoked sqllogship.exe from the command prompt and indeed it took 30 seconds, so the problem lies within sqllogship.exe.

I thought perhaps these 200 DBs have created a large MSDB that is slowing down sqllogship, so I ran sp_delete_backuphistory and eventually deleted everything but the last 30 days, which leaves about 100k records; that is very small compared to the other servers that run sqllogship much faster. I also added some indexes I thought could help (backupset(media_set_id), restorefile(restore_history_id), restorefilegroup(restore_history_id)) but they did not affect the 30-second runtime.

What is odd is that it has been exactly 30 seconds on several databases. Another clue is that backupset logs the backups made by sqllogship.exe as finishing in a second or so... so where are the other 29 seconds coming from? I tried setting verboselevel to 0 in sqllogship, but that doesn't help either.

All of our servers have this version: Microsoft SQL Server 2012 (SP1) - 11.0.3368.0 (X64) Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

Thanks!

Setting up SQL Server 2012 AlwaysOn, can't get error msgs to clear

Posted: 07 Mar 2013 09:00 AM PST

I'm trying to set up a SQL 2012 availability group between 'ServerA' and 'ServerB'.

On the source server (ServerA) I get the following message in the 'AlwaysOn High Availability' tab in SQL Server Configuration Manager: 'An error occurred while loading the AlwaysOn High Availability properties [return code: 0x80070005]'. This server is running Windows Server 2008 R2 Enterprise and SQL Server 2012 SP1 Enterprise edition, but I can't confirm whether the hotfix in KB2494036 has been installed, as I don't have admin rights on the server (don't ask). The server admin said he was going to install it last night, but I haven't confirmed with him and he's left the office for the night.

On the destination server (ServerB) I get the standard error message saying that SQL 2012 Enterprise Edition, Windows Server 2008 and KB2494036 need to be installed. This server is also running Windows Server 2008 R2 Enterprise and SQL Server 2012 SP1 Enterprise edition, and I can confirm the hotfix has been installed.

When I run the availability group wizard on ServerA and, on the Replicas tab, put in the credentials for ServerB, I get the error message: "Cannot connect to QA-SQL-LOAD1\VRS70. AlwaysOn feature is not enabled on SQL Server instance 'QA-SQL-LOAD1\VRS70'. (Microsoft.SqlServer.Management.HadrTasks)"

Getting clear installation and troubleshooting documentation for setting this up has been quite challenging (which might be part of my problem...). Does anyone have any ideas on where to go from here?

TIA,
Mike

Copy Tables to Another DB on Same Server - Error

Posted: 02 Jul 2013 07:36 AM PDT

As I process each month, I am copying some of the tables in my DB (db1) to another DB (db1_archive) on the same server. This is to allow users access to older data from a web front end. The tables are renamed from tbl1 to tbl1_201305, the suffix being year and month. This is done in a stored proc in db1.

It worked once, but now gives an error saying either that db1_archive does not exist or that there is no permission. I am not sure what changed; some of the tables are getting copied, though. What am I doing wrong? Is there a permission on the table that I need to change? If so, how? Thanks in advance.

[code="sql"]BEGIN
    IF @thruPeriod IS NULL OR @thruPeriod = ''
    BEGIN
        PRINT(N'The value cannot be null or empty.')
    END
    ELSE
    BEGIN
        --final output
        EXEC ('SELECT * INTO tbl1_' + @thruPeriod + ' FROM tbl1')
    END
END[/code]
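One thing worth noting (my observation, not from the thread): as written, the dynamic SELECT ... INTO carries no database qualifier, so the copy lands in whichever database the statement runs in. A sketch using three-part names, with the object names assumed from the description above:

[code="sql"]-- Qualify both source and destination so the copy always lands in the
-- archive database regardless of the caller's current database context.
EXEC ('SELECT * INTO db1_archive.dbo.tbl1_' + @thruPeriod
    + ' FROM db1.dbo.tbl1');[/code]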
