Friday, March 15, 2013

[how to] oracle huge hash_area_size



oracle huge hash_area_size

Posted: 15 Mar 2013 08:09 PM PDT

What is the maximum setting for the hash_area_size parameter? I'm trying to perform a huge join which ends up allocating 10 GB of temp space and completes in an hour. 10 GB is not that much by today's standards, and I have the RAM available in the system, so it should be possible to process the join in memory. Is there a way to tell Oracle to do that? When I try to set hash_area_size to 10 GB it complains, because the parameter seems to be limited to a 32-bit integer.
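For what it's worth, hash_area_size is only honored under manual PGA management, and it is indeed capped at a 32-bit value (2147483647 bytes), so 10 GB is out of reach for that parameter. A hedged sketch of both routes (the sizes are illustrative, not recommendations):

```sql
-- Manual workarea management: hash_area_size applies, but ~2 GB is the ceiling.
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET hash_area_size = 2000000000;  -- near the 32-bit limit

-- The usual alternative: automatic PGA management with a large target
-- (instance-wide; a single workarea still gets only a slice of this).
ALTER SYSTEM SET pga_aggregate_target = 16G;
```

Note that even under the automatic policy, Oracle internally limits how much of the target one workarea may consume, so a single 10 GB in-memory hash join may still spill to temp.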

Cannot initialize the data source object of OLE DB provider “Microsoft.ACE.OLEDB.12.0” for linked server

Posted: 15 Mar 2013 02:25 PM PDT

I'm trying to establish a linked server from SQL Server 2008 R2 to an Access database. The Access database is not password protected. As I have seen in other posts (specifically this one), this is usually due to the current user not having access to the Temp folder inside the NetworkService profile. Since we are running this on Windows Server 2008, the directory structure is quite different from what most posts are alluding to. I have allowed access to every Temp folder in the "root:\Documents and Settings\" directory, but to no avail.

This is the procedure I am using to add the linked server:

EXEC sp_addlinkedserver
    @server     = N'OS_Access',
    @provider   = N'Microsoft.ACE.OLEDB.12.0',
    @srvproduct = N'',
    @datasrc    = N'C:\RTBData\Data\OS.mdb';
GO

It creates the linked server with no problem, but I am not able to view the tables/views of the database. Likewise, my ASP.NET application cannot access it either.

I have tried both ACE and Jet (64-bit, by installing the Data Connectivity Components for Office), and neither works. I have also tried configuring the ACE provider with "Dynamic Parameter" and "Allow InProcess" set to true.

Additionally, I tried increasing the memory available to the MSSQLSERVER service by adding "-g512;" to the front of the startup parameters string in SQL Server Configuration Manager, to rule out memory issues.

If anyone could shed some light on this that would be fantastic! Thanks!
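Once the provider or permission issue is resolved, a quick smoke test can confirm the linked server actually serves data (the table name below is a placeholder):

```sql
-- Verify connectivity through the provider:
EXEC sp_testlinkedserver N'OS_Access';

-- Then try pulling a row; SomeTable is hypothetical:
SELECT * FROM OPENQUERY(OS_Access, 'SELECT TOP 1 * FROM SomeTable');
```

If sp_testlinkedserver succeeds but OPENQUERY fails, the problem is usually the provider's in-process permissions rather than the connection string itself.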

MySQL 5.6 Failure

Posted: 15 Mar 2013 08:25 PM PDT

I have the following in RPM_UPGRADE_HISTORY in /var/lib/mysql

MySQL RPM upgrade to version 5.6.10-1.linux_glibc2.5
'pre' step running at Fri Mar 15 15:41:26 EDT 2013
ERR file(s):
-rw-r----- 1 mysql root 1725 2013-03-15 13:45 /var/lib/mysql/mysql.err

I checked the error file and found this information:

2013-03-15 15:43:22 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2013-03-15 15:43:22 11993 [Note] Plugin 'FEDERATED' is disabled.
2013-03-15 15:43:22 11993 [Note] InnoDB: The InnoDB memory heap is disabled
2013-03-15 15:43:22 11993 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2013-03-15 15:43:22 11993 [Note] InnoDB: Compressed tables use zlib 1.2.3
2013-03-15 15:43:22 11993 [Note] InnoDB: CPU does not support crc32 instructions
2013-03-15 15:43:22 11993 [Note] InnoDB: Using Linux native AIO
2013-03-15 15:43:22 11993 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2013-03-15 15:43:22 11993 [Note] InnoDB: Completed initialization of buffer pool
2013-03-15 15:43:22 11993 [ERROR] InnoDB: auto-extending data file ./ibdata1 is of a different size 640 pages (rounded down to MB) than specified in the .cnf file: initial 768 pages, max 0 (relevant if non-zero) pages!
2013-03-15 15:43:22 11993 [ERROR] InnoDB: Could not open or create the system tablespace. If you tried to add new data files to the system tablespace, and it failed here, you should now edit innodb_data_file_path in my.cnf back to what it was, and remove the new ibdata files InnoDB created in this failed attempt. InnoDB only wrote those files full of zeros, but did not yet use them in any way. But be careful: do not remove old data files which contain your precious data!
2013-03-15 15:43:22 11993 [ERROR] Plugin 'InnoDB' init function returned error.
2013-03-15 15:43:22 11993 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2013-03-15 15:43:22 11993 [ERROR] Unknown/unsupported storage engine: InnoDB
2013-03-15 15:43:22 11993 [ERROR] Aborting

2013-03-15 15:43:22 11993 [Note] Binlog end
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'partition'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'BLACKHOLE'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'ARCHIVE'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_INSERTED'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_METRICS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_CMPMEM'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_CMP'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_LOCKS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_TRX'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'MRG_MYISAM'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'MEMORY'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'CSV'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'MyISAM'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'sha256_password'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'mysql_old_password'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'mysql_native_password'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'binlog'
2013-03-15 15:43:22 11993 [Note] /usr/sbin/mysqld: Shutdown complete

130315 15:43:22 mysqld_safe mysqld from pid file /var/lib/mysql/mysql.pid ended

I installed using the RPM packages from the MySQL web site. What went wrong?
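The decisive line in that log is the ibdata1 size mismatch: the existing file is 640 pages (10 MB at InnoDB's 16 KB page size) while the config asks for 768 pages (12 MB). Assuming the old data should be kept, one way out is to make my.cnf declare the file at its actual size:

```
# my.cnf fragment (sketch): declare ibdata1 at its on-disk size,
# 640 pages * 16 KB = 10M, so InnoDB accepts the existing file
[mysqld]
innodb_data_file_path = ibdata1:10M:autoextend
```

With autoextend the file can still grow past 10M; the declared initial size just has to match what is already on disk.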

After Rackspace server creation, PostgreSQL query planner doesn't work as expected

Posted: 15 Mar 2013 06:32 PM PDT

We created an image of one of our database servers in Rackspace, then created a new server from that image, expecting things to work. However, index performance on the new server seems to be bad, and even the query plans differ when comparing the EXPLAIN ANALYZE output of the first server with the second. Basically, the second server, the one restored from the saved image, does not seem to rely on indexes as it should.

Is there something related to image restore that would cause this behavior? If so, what may need to be done to fix the issue? Should I look somewhere else?

Thanks very much! I can provide more information if needed, so let me know if that would be useful.
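One low-risk first step, sketched below, is to refresh planner statistics on the restored server and compare the planner-relevant settings between the two machines, since either stale statistics or changed cost settings will produce different plans:

```sql
-- Rebuild planner statistics (and visibility info) for the whole database:
VACUUM ANALYZE;

-- Compare these between the old and new servers; a difference here
-- (e.g. after a RAM change on the new server flavor) changes plan choices:
SHOW random_page_cost;
SHOW effective_cache_size;
SHOW work_mem;
SHOW shared_buffers;
```

If plans still differ after ANALYZE with identical settings, comparing row-count estimates in the two EXPLAIN ANALYZE outputs will show which table's statistics diverge.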

Create schema that allows for fluidity between entities

Posted: 15 Mar 2013 01:16 PM PDT

We are a small, rapidly evolving company, and we recently investigated switching databases from SQL Server to a graph database. We like the graph approach because of the fluidity and openness it offers.

For instance, with a graph DB you can add or remove new relationships quite easily. However, in an RDBMS it is much more difficult.

Is it possible to have some or all of the fluidity in a RDBMS like SQL Server?
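One common way to approximate graph-style fluidity in SQL Server is a generic node/edge pair, where a new relationship kind is just a new row rather than a schema change. A sketch with illustrative names:

```sql
CREATE TABLE Node (
    NodeId INT IDENTITY PRIMARY KEY,
    Label  NVARCHAR(100) NOT NULL
);

CREATE TABLE Edge (
    FromId  INT NOT NULL REFERENCES Node (NodeId),
    ToId    INT NOT NULL REFERENCES Node (NodeId),
    RelType NVARCHAR(50) NOT NULL,   -- new relationship kinds need no DDL
    PRIMARY KEY (FromId, ToId, RelType)
);
```

The trade-off is real: you give up per-entity column typing and some join performance, and multi-hop traversals become recursive CTEs, so this pattern is usually reserved for the genuinely fluid part of the model rather than the whole schema.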

Database Tasks in C#

Posted: 15 Mar 2013 07:02 PM PDT

We have two database environments, a "Production" database and a "Reporting" database, located on different physical Windows Server 2003 boxes. The Reporting DB is derived from the Production database and contains some modified table schemas plus additional "pre-cooked" data tables that would be too costly to otherwise compute on the fly.

What I had originally created was a database script (MS SQL) on our Reporting Server to grab the live Production database at 1 AM and restore it in our Reporting environment, then run all the scripts to get it up and going, such as the table restores and the modification of the data. The problems with this approach are a) how long it takes (there is no option of putting a 'modifieddate' on any production records), and b) a junior DB developer (me) should probably not dive into this stuff yet, because he's more of a programmer :).

Would it be 'best practice' to create a Windows service which basically runs the restore at 1 AM and sends an email if the process failed? I'm not sure a C# component will give me the level of granularity that MS SQL scripts allow. So, is using a programming language to accomplish this task nuts, and should this all be done in the database?

How would you recommend I run the syncing of the two environments?

Let me know what you think!
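For what it's worth, the conventional answer is to keep the work in T-SQL and let SQL Server Agent own the schedule and the failure alerting, rather than writing a custom Windows service. A sketch of the restore step (all paths and names below are hypothetical):

```sql
-- Agent job step 1: restore last night's production backup over Reporting
RESTORE DATABASE Reporting
FROM DISK = N'\\prodserver\backups\Production.bak'
WITH REPLACE,
     MOVE N'Production_Data' TO N'D:\Data\Reporting.mdf',
     MOVE N'Production_Log'  TO N'E:\Logs\Reporting.ldf';

-- Subsequent job steps run the schema tweaks and "pre-cooked" table builds;
-- the job's notification settings can e-mail an operator if any step fails.
```

That keeps the granularity of T-SQL scripts while Agent provides the 1 AM schedule, retry logic, and email-on-failure that the custom service would otherwise have to reimplement.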

What is the difference between leaf and non-leaf In SQL Server?

Posted: 15 Mar 2013 08:28 PM PDT

I've been running some index usage reports, and I'm trying to get a definition of leaf and non-leaf. There seem to be both leaf and non-leaf inserts, updates, deletes, page merges, and page allocations. I really don't know what they mean, or whether one is better than the other.

If someone could give a simple definition of each, and also explain why Leaf or Non-leaf matters, it would be appreciated!
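Briefly: in an index B-tree, the leaf level holds the actual data or key rows, while the non-leaf levels are the navigation pages above them, so leaf activity tracks your data modifications and non-leaf activity tracks tree maintenance. The counters in those reports come from a DMV you can query directly; a sketch:

```sql
-- Per-index leaf vs. non-leaf activity for the current database:
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.index_id,
       s.leaf_insert_count,  s.nonleaf_insert_count,
       s.leaf_update_count,  s.nonleaf_update_count,
       s.leaf_delete_count,  s.nonleaf_delete_count,
       s.leaf_page_merge_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s
ORDER BY s.leaf_insert_count DESC;
```

Neither kind is "better"; a high non-leaf-to-leaf ratio just suggests lots of page splits and tree reshuffling, which can point at fill factor or key-choice issues.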

Best way to perform SQL Server replication over multiple servers?

Posted: 15 Mar 2013 12:00 PM PDT

My current setup requires me to use three separate SQL Server 2012 instances (installed on 3 separate machines, separated geographically) and have the data be replicated across all 3. Any changes made in the 7 tables within my database need to be replicated based on where the change was made, for example, if I add a record on Server A, it needs to be added on Server B and Server C. If a record is added on Server B, it needs to be added on Server A and Server C. Same thing with Server C.

From what I've read, bidirectional transactional replication seems to be the best avenue for me in SQL Server 2012 Standard. The issue is that no one has a very good guide to bidirectional transactional replication that really explains much. I think I've pored over Hilary Cotter's and Microsoft's information a million times, and I still can't seem to get it to work. The sample data they provide in examples just isn't practical, and no one explains how to do it between two physical machines; they assume you're using the same machine and just replicating between instances.

Am I doing something wrong or am I going about this the wrong way?
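For reference, the piece most walkthroughs gloss over is that bidirectional transactional replication is just an ordinary publication/subscription in each direction, with loopback detection enabled so a change does not bounce back to its origin. A hedged sketch of the subscription call, run at each publisher against the other machine (all names are placeholders):

```sql
-- Run at Server A pointing at Server B (and mirrored at B pointing at A):
EXEC sp_addsubscription
     @publication        = N'MyPub',      -- placeholder publication name
     @subscriber         = N'ServerB',    -- the remote machine's server name
     @destination_db     = N'MyDB',
     @subscription_type  = N'push',
     @loopback_detection = N'true';       -- the key flag for bidirectional setups
```

Across physical machines the usual stumbling blocks are not the replication calls themselves but name resolution (subscribers must be registered by actual server name, not IP) and the SQL Agent service accounts having rights on both boxes.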

MySQL backup InnoDB

Posted: 15 Mar 2013 10:34 AM PDT

I have a VoIP server running 24x7. Even at off-peak hours at least 150 users are connected. My server runs MySQL with the InnoDB engine on the Windows 2008 platform. I would like to take a full database backup at least twice a day without shutting down my service.

According to Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always good.

read here if you are interested

As I'm not a DBA, I would like to know which would be the best way to take a database backup in my scenario.
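For what it's worth, Zaitsev's caveats about --single-transaction mostly concern concurrent DDL and non-InnoDB tables; if all tables are InnoDB and no schema changes run during the backup, it gives a consistent dump without blocking your users. A Windows batch sketch (user, password, and path are placeholders):

```
REM Consistent InnoDB dump while the service keeps running:
mysqldump --single-transaction --routines --triggers ^
    --user=backup --password=*** --all-databases > C:\backup\full.sql
```

The hot-backup alternatives for InnoDB (Percona XtraBackup, MySQL Enterprise Backup) are worth a look too, though XtraBackup is Linux-only, which matters on a Windows 2008 host.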

Thanks,

Collect CPU, reads and writes when replaying SQL Profiler Trace

Posted: 15 Mar 2013 09:48 AM PDT

I'm trying to replay a trace on a test environment with SQL Profiler in order to benchmark a database prior to making some performance tweaks. I intend to replay the same trace at various stages to compare the stats and measure the improvements.

I've been following the post at keepitsimpleandfast, which seems good; however, I'm not getting the CPU, Reads and Writes data saved when I replay the trace in step 6.

I had to add the CPU, Reads and Writes columns to the events when I set up the TSQL_Replay template for the original trace, but I don't have any options in Replay Configuration to select events or columns.

Strangely, when I ran a separate trace first and then replayed the original trace, nothing got traced at all! The events are different, though: instead of RPC:Completed I get Replay Result Set Event and Replay Result Row Event. Would this make a difference?

Does anyone know how to either amend the columns logged when you replay a trace OR how to set up another trace to trace the replay?

Getting timeouts and backup errors after mirroring and log shipping is setup

Posted: 15 Mar 2013 01:00 PM PDT

The OS is Windows Server 2003 32-bit. SQL Server is SQL Server 2008 SP3 32-bit.

Today I finished configuring database mirroring and log shipping for 3 databases on this SQL instance, and I started receiving timeouts (connection and query timeouts from our applications) and log backup errors (the log backup is performed by the log shipping executable). I never had these kinds of issues before, when database mirroring was not active and log backups were performed by our custom SSIS package instead of log shipping (also, log backup was previously set up for only 1 database every 15 minutes, and now it runs for 3 databases every 5 minutes).

More details:


1) Connection timeout from one of our applications:

Event Text:
System.Data.SqlClient.SqlException: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
   at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
   at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
   at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
   at System.Data.SqlClient.SqlConnection.Open()
   at App.Component1.EventBroadcaster.ConnectToSql(SqlConnection connect, Boolean bulk)
   at App.Component1.Broadcaster.ReBroadCastLocate(Int32 LocateId, BroadCastType bcType_)

2) Query timeout from another of our applications:

Event Text:
System.Data.SqlClient.SqlException: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
   at App1.RTFS.DBProcess.exeNonQuery(SqlCommand cmd_)
   at App1.RTFS.RTDB.submitOrder(NewOrderParams& params_)
   at App1.RTFS.NewOrderMessageProcess.processSingle()
   at App1.RTFS.NewOrderMessageProcess.processSubmit()
   at App1.RTFS.NewOrderMessageProcess.process()
   at App1.RTFS.Application.processMessageRequest(IMessageProcess msgProcess, SessionID sessionID)

3) Log backup failure (all of these are created at the same time in the Event log)

Event 3041
BACKUP failed to complete the command BACKUP LOG DB0. Check the backup application log for detailed messages.

Event 701
There is insufficient system memory in resource pool 'default' to run this query.

Event 701
There is insufficient system memory in resource pool 'internal' to run this query.

These errors are not that frequent: 5-6 times a day.


I think the error messages related to the failed log backup ("There is insufficient system memory in resource pool") indicate a possible bottleneck. From what I found, a possible solution is to allocate more memory to SQL Server by using AWE. However, this server only has 4 GB of physical RAM, and SQL Server is already using 1.8 GB (as shown by Task Manager).

How can I better troubleshoot these issues? Is there a way to see detailed memory situation for SQL Server and trace exactly what queries/connections are timing out?
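Some starting points, sketched as queries you can run during an incident: break down where SQL Server's memory is going, and watch what is executing and waiting at the moment timeouts occur.

```sql
-- Largest memory consumers by clerk (SQL Server 2008 column names):
SELECT TOP (10) type,
       SUM(single_pages_kb + multi_pages_kb) / 1024 AS total_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY total_mb DESC;

-- What is running and waiting right now, to catch queries as they time out:
SELECT r.session_id, r.status, r.wait_type, r.wait_time,
       r.cpu_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;   -- skip system sessions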

MS Access 2007-2010 SQL: how to create multiple rows from a single-row record

Posted: 15 Mar 2013 01:39 PM PDT

I need help with the table update below, using SQL in an MS Access 2007-2010 database.

I have a table called table1 which has the entries below:

table1:

 col1 | col2 |    col3
------+------+-------------
  A1  | abcd | 123;456;789
  B2  | efgh | 246;369
  C3  | ijk  |

Using SQL, I want to transform the table above (specifically, turning the multiple values in column col3 into individual rows) as shown below:

 col1 | col2 | col3
------+------+------
  A1  | abcd | 123
  A1  | abcd | 456
  A1  | abcd | 789
  B2  | efgh | 246
  B2  | efgh | 369
  C3  | ijk  |

How do I do it? I have tried various forums but could not come up with an exact solution so far.

Appreciate your help.

Thanks, Jak.

SQL Server 2008 R2 replication high delivery latency

Posted: 15 Mar 2013 09:25 AM PDT

I am seeing an unusually high delivery latency between our distributor and subscribers, and I do not understand why.

In this configuration we have 3 SQL Servers using transactional push replication to replicate data from one master server to two reporting servers.

We have 9 publications. The distribution agents for most publications show under 5 ms, but one shows 2000+ ms to both subscribers.

The suspect publication has only 4 small articles (tables) that rarely, if ever, change. I've checked, and each table has a primary key.

I've also checked the @status parameter for each article, per the MS KB article: The distribution agent may experience high latency when you configure transactional replication with articles that are configured not to replicate changes as parameterized statements.

I'm tempted to start dropping articles to find out if one particular table is the culprit.

Does anyone have any suggestions as to what I can look at?
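One non-destructive measurement worth trying before dropping articles is a tracer token, which times each leg (publisher to distributor, distributor to subscriber) for a specific publication. A sketch with placeholder names:

```sql
-- At the publisher, inject a tracer token into the slow publication:
EXEC sys.sp_posttracertoken @publication = N'SuspectPublication';

-- A little later, at the distributor, list tokens and their measured latency:
EXEC distribution.sys.sp_helptracertokens
     @publication  = N'SuspectPublication',
     @publisher    = N'PublisherServer',
     @publisher_db = N'PublisherDB';
-- A returned tracer_id can then be passed to sp_helptracertokenhistory
-- for the per-subscriber breakdown.
```

If the distributor-to-subscriber leg is the slow one for just this publication, that points at its agent profile or article options rather than network or the publisher.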

Refresh Postgres Linked Tables in MS Access

Posted: 15 Mar 2013 10:42 AM PDT

Does anybody know of a VBA procedure that automatically links and refreshes Postgres linked tables (via ODBC) in MS Access 2010?
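A minimal sketch, assuming DAO is referenced and the tables are already linked through an ODBC DSN: loop the TableDefs collection and refresh every ODBC-attached table.

```vba
' Hedged sketch: relink every ODBC-attached table in the current database.
' The existing Connect string (DSN, credentials) is reused as-is.
Public Sub RefreshPostgresLinks()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If InStr(tdf.Connect, "ODBC") = 1 Then  ' ODBC-linked tables only
            tdf.RefreshLink                     ' re-validates the link
        End If
    Next tdf
End Sub
```

RefreshLink re-reads the remote table's structure, so this also picks up column changes made on the Postgres side.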

How can I verify I'm using SSL to connect to mysql?

Posted: 15 Mar 2013 02:23 PM PDT

I have configured my server to allow SSL, and have modified my client ~/.my.cnf so I use SSL:

[client]
ssl
ssl-cipher=DHE-RSA-AES256-SHA
ssl-ca=~/certs/ca-cert.pem

When I log in with my client and view the status, it lists a cipher on the SSL line:

mysql> \s
--------------
SSL:                    Cipher in use is DHE-RSA-AES256-SHA

Without installing something like wireshark to verify that the connection is secure, can I assume that I'm connecting via SSL based on this information?
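Yes, that status line is authoritative for the current session; the same information is also available as a status variable, and SSL can be made mandatory so a plaintext fallback is impossible. A sketch (the account name is a placeholder):

```sql
-- A non-empty value confirms this session is encrypted:
SHOW SESSION STATUS LIKE 'Ssl_cipher';

-- To make SSL mandatory for an account, so a fallback simply fails to log in:
GRANT USAGE ON *.* TO 'app'@'%' REQUIRE SSL;
```

With REQUIRE SSL in place you no longer need to trust the client-side config: a non-SSL connection attempt is rejected by the server outright.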

Database design that handles data growth with time

Posted: 15 Mar 2013 12:43 PM PDT

I have a big table that stores video rental data with the following columns:

id, title, language, duration, owner, remarks, closing_date_for_loan

Assuming thousands of inserts into this table every day, within a year I could have a million rows of data. Searches on this table involve range queries on several columns and will always contain a WHERE closing_date_for_loan > NOW() condition.

To maintain this table, I could periodically move rows with closing_date_for_loan < NOW() out of this big table so that it doesn't grow too large and cause excessive query times. I am looking for a more automated way of handling such data growth, similar to how log rotation works. Any ideas?

Additional Note:

I have tested a few composite indexes, and query time can range from a few seconds to 50 s once the table reaches 5 million rows. Range queries are hard to optimize, so I am looking for other ways, such as keeping the table at a manageable size.
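Assuming this is MySQL, range partitioning on the closing date gives exactly the log-rotation behavior described: the table stays one logical object, but expired rows live in partitions that can be dropped cheaply. A sketch (table and partition names are illustrative):

```sql
-- Sketch: quarterly partitions on the closing date
ALTER TABLE rentals
PARTITION BY RANGE (TO_DAYS(closing_date_for_loan)) (
    PARTITION p2013q1 VALUES LESS THAN (TO_DAYS('2013-04-01')),
    PARTITION p2013q2 VALUES LESS THAN (TO_DAYS('2013-07-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);

-- Retiring a quarter becomes a metadata operation, not a big DELETE:
ALTER TABLE rentals DROP PARTITION p2013q1;
```

One caveat: MySQL requires the partitioning column to appear in every unique key, so if id is the primary key it would need to become (id, closing_date_for_loan) or similar.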

Strange characters in mysqlbinlog output

Posted: 15 Mar 2013 08:26 AM PDT

Has anyone experienced this? Data replicates fine, but when output by mysqlbinlog there are hidden characters that break the input.

  • mysqlbinlog Ver 3.3 for Linux at x86_64
  • mysql 5.5.28 server

Thanks! Julie

High Buffer I/O SQL Server 2008 R2 causes?

Posted: 15 Mar 2013 08:17 AM PDT

I am trying to troubleshoot performance on the SQL Server we have for our document management system, and today performance has hit an all-time low. Looking at the MDW I set up recently, I can see a huge spike in Buffer I/O. The combined SQL wait time is in the region of 1500-2000 ms, which is considerably higher than normal (around 500 ms), and the large proportion of this increase is Buffer I/O.

Not being a DBA, and doing this out of necessity, I am massively out of my depth. Is there any way to tell why there has been a large increase in buffer I/O? Is this likely due to a query pulling large amounts of data, or an incorrect configuration of SQL memory limits? Is there anything specific (or general, for that matter) I should be looking at, or DMVs I can query, to help troubleshoot?

The server is Windows 2008 R2, as is SQL Server. It is a VM with two vCPUs and 8 GB RAM, with the disks hosted on a separate array on a SAN.


EDIT: I think I may have misread this graph on the server. I actually think it is Lock Wait that has increased dramatically not Buffer I/O. The colours are far too similar for me to pick out at a glance :(

Where are PostgreSQL databases stored on my computer?

Posted: 15 Mar 2013 04:40 PM PDT

This might be a stupid question, but I'm really new to this. I'm creating a Postgres database on the Postgres server, which I started with this command:

sudo -u postgres psql  

And then I did the command:

CREATE DATABASE database;  

This supposedly created the database. But where is this database? Where can I find it on my computer?
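PostgreSQL can answer this itself: the cluster's data directory is a server setting, and each database is stored as a subdirectory of base/ named after its OID. A sketch, run inside psql:

```sql
-- The cluster's on-disk location (e.g. /var/lib/postgresql/9.x/main on Ubuntu):
SHOW data_directory;

-- The OID that names this database's subdirectory under <data_directory>/base/:
SELECT oid, datname FROM pg_database WHERE datname = 'database';
```

Note the files there are internal storage pages, not something to read or copy individually; backups should go through pg_dump or a cluster-level file copy with the server stopped.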

QGIS PostGIS Authentication Failure with "trust" option

Posted: 15 Mar 2013 09:37 AM PDT

I'm trying to set up a PostGIS db using PostgreSQL 9.1 on Ubuntu 12.04. I've set up the database, created a user account "jay" with no password, added spatial functionality following section 2.5 here, and set my pg_hba.conf for local to:

# "local" is for Unix domain socket connections only
local   all             all                                     trust

I can connect to the database using pgAdmin III, requiring no password, but the connection fails when I try to connect to the PostGIS db via QuantumGIS. Here's a screencap of the connection window in QuantumGIS:
PostGIS connection

Looking into the problem a bit, I came across this post, which led me to try connecting via the terminal using psql -U jay -h localhost gis. psql prompted me for a password; I hit Enter because I had not set a password, and it returned psql: fe_sendauth: no password supplied. I tried again, this time supplying my Ubuntu user password, which returned

psql: FATAL:  password authentication failed for user "jay"
FATAL:  password authentication failed for user "jay"

I then tried setting a password for jay using ALTER USER and entering this password at the prompt, and this failed as well. Clearly, I am unable to connect, but I'm having trouble figuring out what troubleshooting steps to take next.

Any ideas?
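A likely explanation: psql -h localhost makes a TCP connection, which pg_hba.conf matches against host lines, not the local (Unix socket) line that was set to trust. A sketch of the distinction:

```
# Matched by "psql -U jay gis" (no -h): Unix socket, trust, no password asked
local   all   all                 trust

# Matched by "psql -h localhost ..." and by GUI clients using TCP:
# this line still demands a password
host    all   all   127.0.0.1/32  md5
```

QGIS connects over TCP, so it hits the host rule. Either set and use a real password for jay, or relax the 127.0.0.1/32 host line (and reload with pg_ctl reload or service postgresql reload) if passwordless local TCP access is acceptable.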

Proper indexes or anything else to optimize

Posted: 15 Mar 2013 08:40 AM PDT

I need help optimizing the query below to avoid "Using temporary" and "Using filesort".

CREATE TABLE `target_bahrain_pepsi` (
    `id` int(11) DEFAULT NULL,
    `col_value` varchar(50) DEFAULT NULL,
    `source_file_id` int(11) DEFAULT NULL,
    KEY `id` (`id`)
) ENGINE=InnoDB;

SELECT su.adbrand_id, SUM(st.col_value) AS amount
FROM statex_with_id su
INNER JOIN target_bahrain_pepsi st ON st.id = su.id
GROUP BY su.adbrand_id
ORDER BY amount DESC;

The table statex_with_id also has an index.

mysql> EXPLAIN SELECT su.adbrand_id, SUM(st.col_value) AS amount
    -> FROM statex_with_id su
    -> INNER JOIN target_bahrain_pepsi st
    -> ON st.id = su.id GROUP BY su.adbrand_id ORDER BY amount DESC;
+----+-------------+-------+--------+---------------+---------+---------+------------------+---------+----------------------------------------------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref              | rows    | Extra                                        |
+----+-------------+-------+--------+---------------+---------+---------+------------------+---------+----------------------------------------------+
|  1 | SIMPLE      | st    | index  | id            | id      | 58      | NULL             | 1804021 | Using index; Using temporary; Using filesort |
|  1 | SIMPLE      | su    | eq_ref | PRIMARY       | PRIMARY | 4       | dashboard1.st.id |       1 | Using where                                  |
+----+-------------+-------+--------+---------------+---------+---------+------------------+---------+----------------------------------------------+
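The temporary table comes from grouping on su.adbrand_id while the join is driven from st, and ORDER BY on a computed aggregate forces the filesort, so they are hard to eliminate entirely. One sketch worth benchmarking is a covering index on the big table (index name is illustrative):

```sql
-- Lets MySQL read both join key and summed value from the index alone,
-- avoiding row lookups on the 1.8M-row table:
ALTER TABLE target_bahrain_pepsi
  ADD KEY idx_id_value (id, col_value);
```

This will not remove "Using temporary; Using filesort" from EXPLAIN, but it typically shrinks the expensive part of such queries; the sort of the final, small grouped result is usually cheap.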

Connecting to a SQL Server database from a Flash program

Posted: 15 Mar 2013 10:40 AM PDT

I currently have the ability to utilize Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7.

If I link my database to Adobe Flash Builder, are there any additional steps I must take to make the database live? Or, as long as my computer is running, will this database be accessible from any device that uses it?

In other words, is this a LAN-only system, or does it automatically make itself available to the programs I link to it?

Oracle Express edition on Ubuntu - control file missing

Posted: 15 Mar 2013 01:40 PM PDT

I have installed the Oracle Express edition on Ubuntu as mentioned here.

I am facing issues when I try to create a sample table.

Started oracle

$ sudo service oracle-xe start
Starting Oracle Database 11g Express Edition instance.

Started sqlplus

$ sqlplus / as sysdba  

Executed the CREATE command

SQL> CREATE TABLE SAMPLE (ID NUMBER);
CREATE TABLE SAMPLE (ID NUMBER)
*
ERROR at line 1:
ORA-01109: database not open

After some research on the web, I tried to shut down and restart Oracle:

Shutdown command

SQL> shutdown
ORA-01507: database not mounted
ORACLE instance shut down.

Started the oracle instance

SQL> startup
ORACLE instance started.
Total System Global Area  688959488 bytes
Fixed Size                  2229688 bytes
Variable Size             411044424 bytes
Database Buffers          272629760 bytes
Redo Buffers                3055616 bytes
ORA-00205: error in identifying control file, check alert log for more info

I realized that the control file is missing at /u01/app/oracle/oradata/XE. XE is the DB name.

So I tried to create the control file as follows:

SQL> CREATE CONTROLFILE SET DATABASE XE RESETLOGS;
Control file created.

Tried to create the sample table again

SQL> CREATE TABLE SAMPLE(ID NUMBER);
CREATE TABLE SAMPLE(ID NUMBER)
*
ERROR at line 1:
ORA-01109: database not open

So I tried to issue the following command

SQL> ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE OPEN RESETLOGS
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/app/oracle/product/11.2.0/xe/dbs/dbs1XE.dbf'

What should be done next? I am clueless, as I am not a database guy.

Note:

Output of

$ lsnrctl services

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 04-JAN-2013 09:15:37

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_XE)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
         LOCAL SERVER
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
The command completed successfully

Find the first gap in an aggregation of integers in SQL Server

Posted: 15 Mar 2013 02:03 PM PDT

Let's say I have a table called dbo.GroupAssignment.

GroupID | Rank
--------+-----
   1    |  1
   1    |  2
   1    |  3
   2    |  1
   2    |  3
   2    |  4
   3    |  2
   3    |  4
   3    |  5

The PK is (GroupID, Rank). Normally, the ranks within a group form a contiguous sequence of integers starting from 1, but it's possible for a GroupAssignment to be removed, leaving a gap. When a new assignment is made for the group, I want to fill the first available gap.

So, how could I calculate this in SQL Server?
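One sketch: the first gap is either Rank 1 itself (if missing) or one more than the smallest Rank whose successor is absent; if the sequence is complete, this naturally yields the next free slot at the end.

```sql
DECLARE @GroupID INT = 2;   -- sample group with ranks 1, 3, 4

SELECT CASE
         WHEN NOT EXISTS (SELECT 1 FROM dbo.GroupAssignment
                          WHERE GroupID = @GroupID AND Rank = 1)
           THEN 1                                    -- gap right at the start
         ELSE (SELECT MIN(a.Rank) + 1                -- smallest Rank whose
               FROM dbo.GroupAssignment AS a         -- successor is missing
               WHERE a.GroupID = @GroupID
                 AND NOT EXISTS (SELECT 1
                                 FROM dbo.GroupAssignment AS b
                                 WHERE b.GroupID = a.GroupID
                                   AND b.Rank = a.Rank + 1))
       END AS FirstAvailableRank;
```

Against the sample data this returns 2 for group 2, 1 for group 3 (Rank 1 is missing), and 4 for group 1 (no gap, so the next slot at the end).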

mysql cannot start because of init.d folder?

Posted: 15 Mar 2013 08:27 AM PDT

After I set up MySQL 5.5, I changed the data dir to another folder and modified the related config in my.cnf. This is quite simple, but MySQL cannot start anymore, even if I reinstall or revert the data dir change.

The error log has no useful information, so I cannot find a reason:

111207 16:28:02 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
111207 16:28:02 mysqld_safe mysqld from pid file /var/lib/mysql/localhost.localdomain.pid ended

But if I use the following command as the start script, MySQL starts normally:

/usr/bin/mysqld_safe --datadir=/data/mysql/ --pid-file=/data/mysql/localhost.localdomain.pid > /dev/null 2>&1

Stranger still, if I use /usr/share/mysql/mysql.server start instead of /etc/init.d/mysql start, MySQL also starts without error, even though these two files are the same. So in the end I had to use cp -l /usr/share/mysql/mysql.server /etc/init.d/mysql to fix the problem.

I really want to know why, and whether there is a better solution!

One Big Database vs. Several Smaller Ones

Posted: 15 Mar 2013 08:54 AM PDT

We have a situation where we can (A) deploy instances of an application in one MySQL database using table prefixing, or (B) use a different MySQL database for each instance of the application. For example:

Setup "A":

central_database
  app1_table1
  app1_table2
  app1_tablen
  ...
  appn_table1
  appn_table2
  appn_tablen

The end result being a large db with many tables.

Setup "B":

app1_db
  table1
  table2
  tablen
...
appn_db
  table1
  table2
  tablen

The end result being many databases with some tables.

All things being equal (e.g., amount of data, number of app instances, etc.), what are the pros and cons of each approach? What would be detrimental to database performance and maintenance? The application is PHP 5 based, running on Apache 2.x, and we're on MySQL 5.x.

Many thanks for your time and thoughts!

Is it possible to mysqldump a subset of a database required to reproduce a query?

Posted: 15 Mar 2013 01:22 PM PDT

Background

I would like to provide the subset of my database required to reproduce a select query. My goal is to make my computational workflow reproducible (as in reproducible research).

Question

Is there a way to incorporate this select statement into a script that dumps the queried data into a new database, such that the database could be installed on a new MySQL server and the statement would work against it? The new database should not contain any records beyond those used in the query.

Update: For clarification, I am not interested in a csv dump of query results. What I need to be able to do is to dump the database subset so that it can be installed on another machine, and then the query itself can be reproducible (and modifiable with respect to the same dataset).

Example

For example, my analysis might query a subset of data that requires records from multiple (in this example 3) tables:

select table1.id, table1.level, table2.name, table2.level
  from table1
  join table2 on table1.id = table2.table1_id
  join table3 on table3.id = table2.table3_id
 where table3.name in ('fee', 'fi', 'fo', 'fum');
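One approach (a sketch, not the only option) is to materialize the subset into a separate scratch schema with CREATE TABLE ... SELECT, preserving the original table names, and then dump that schema with a plain mysqldump. The schema name subset_db below is made up:

```sql
-- Copy only the rows each table contributes to the query into a scratch schema,
-- keeping the original table names so the query runs unchanged against subset_db.
CREATE DATABASE subset_db;

CREATE TABLE subset_db.table3 AS
  SELECT * FROM table3 WHERE name IN ('fee', 'fi', 'fo', 'fum');

CREATE TABLE subset_db.table2 AS
  SELECT t2.* FROM table2 t2
  JOIN subset_db.table3 t3 ON t3.id = t2.table3_id;

CREATE TABLE subset_db.table1 AS
  SELECT t1.* FROM table1 t1
  JOIN subset_db.table2 t2 ON t1.id = t2.table1_id;
```

Be aware that CREATE TABLE ... SELECT does not copy indexes or foreign keys; mysqldump's per-table --where option is an alternative when you can express each table's filter independently.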

[SQL Server] Obtaining Before & After Record & Missing Record.

[SQL Server] Obtaining Before & After Record & Missing Record.


Obtaining Before & After Record & Missing Record.

Posted: 15 Mar 2013 11:43 AM PDT

I have a database that has two tables. One table is a date table that has all of the possible dates for a year within it. Within this table is a field ("concatenate") that contains a varchar value holding the date in the format 201201010510, representing year, month, day, hour and minute. This table is joined to a data table that contains the same varchar field named "concatenate". How do I query the data table to obtain the missing dates as well as the first prior matched date and the following matched date? Listed below is the format of the data.

Record  Concatenate
1       201201010510
2       201201010511
3       201201010514
4       201201010515
5       201201010517

The query would report:

201201010511
201201010512
201201010513
201201010514
201201010515
201201010516
201201010517
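Since the date table already enumerates every possible concatenate value, the gaps fall out of a LEFT JOIN. A sketch; the table names DateTable and DataTable are guesses from the description:

```sql
-- Calendar values that have no match in the data table, limited to the
-- range actually covered by the data.
SELECT d.concatenate
FROM DateTable d
LEFT JOIN DataTable t ON t.concatenate = d.concatenate
WHERE t.concatenate IS NULL
  AND d.concatenate BETWEEN (SELECT MIN(concatenate) FROM DataTable)
                        AND (SELECT MAX(concatenate) FROM DataTable)
ORDER BY d.concatenate;
```

The prior and following matched values for a given gap can then be picked up with MAX(concatenate) below the gap and MIN(concatenate) above it against the data table.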

rows to columns - creating view for SSRS

Posted: 15 Mar 2013 02:01 AM PDT

I think this issue is complex (at least to me) but I will try to explain it correctly and include some meaningful info. I am trying to create a view to access from BIDS. The originating table goes like this:

wfId created    logType activityName  agentSource agentName outcome details
C7   2013-03-14 310     Technical     Fred        John      NULL    Pre-Sales - Check Technical Details
C7   2013-03-15 316     Technical     John        Fred      Reject  Performed By John
C7   2013-03-15 310     TechReject    John        Fred      NULL    Originator - Technical Rejection
C7   2013-03-16 316     TechReject    Fred        John      NULL    Performed By Fred
C7   2013-03-16 310     Technical     Fred        John      NULL    Pre-Sales - Check Technical Details
C7   2013-03-16 316     Technical     John        Fred      Checked Performed By John
C7   2013-03-16 310     SalesDirector John        Steve     NULL    Sales Director - Please Approve Proposal
C8   2013-03-14 310     Technical     Fred        John      NULL    Pre-Sales - Check Technical Details
C8   2013-03-15 316     Technical     John        Fred      Checked Performed By John
C8   2013-03-15 310     SalesDirector John        Steve     NULL    Sales Director - Please Approve Proposal

I am trying to return the following in the view:

workflowId TechDt     TechAgent TechOut TechOutDt  TechDur ReSubDt    ReSubAgent ReSubOut ReSubOutDt ResubDur SDAgent SDOut SDDur
C7         2013-03-14 John      Reject  2013-03-15 1       2013-03-16 John       Checked  2013-03-16 0        Steve   NULL  NULL
C8         2013-03-14 John      Checked 2013-03-15 1       NULL       NULL       NULL     NULL       NULL     Steve   NULL  NULL

(Where Dt = Date, Out = Outcome, Dur = Duration.)

For this example, assume there can only be one re-submit and the only outcome for SalesDirector is Accept or Reject, with no resubmit in the SalesDirector outcome. In words, I am trying to get the duration of each step of the workflow; in the real database there are many possible legs to a workflow and many possible routes, but they all follow the same basic principle.
I would also like to be able to see where the current WorkflowID is; for example, C7 and C8 are currently with the Sales Director. I have tried unions, and inner joins to get the data onto one line by using "where" in each join to separate out the different steps into columns. My biggest issue is that the second workflow submission looks the same as the original submission and I need to identify it as a different one. The secondary issue is I don't know which method would run fastest. The actual database has around 40,000 records per year and around 8 lines per unique workflowId. If this is not explained well, please tell me and I will attempt to make it clearer.

Backup databases depending on the size to multiple files

Posted: 15 Mar 2013 06:36 AM PDT

DECLARE @name VARCHAR(50) -- database name
DECLARE @path VARCHAR(256) -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup
DECLARE @fileDate VARCHAR(20) -- used for file name

-- specify database backup directory
SET @path = 'C:\Backup\'

-- specify filename format
SELECT @fileDate = CONVERT(VARCHAR(20),GETDATE(),112)

DECLARE db_cursor CURSOR FOR
SELECT name FROM master.dbo.sysdatabases
WHERE name NOT IN ('master','model','msdb','tempdb') -- exclude these databases

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
    BACKUP DATABASE @name TO DISK = @fileName
    FETCH NEXT FROM db_cursor INTO @name
END

CLOSE db_cursor
DEALLOCATE db_cursor

Month Summarised tabular data output from a table

Posted: 15 Mar 2013 03:52 AM PDT

Using SQL, how is it possible to get this data transformed into a tabular summary report?

Sample data:

supplier_number selection_date document_type
0000001         01/07/2012     AB10019
0000002         01/07/2012     AB20011
0000001         01/07/2012     FR10089
0000031         01/08/2012     AB10084
0000021         01/08/2012     FR10089
0000001         01/08/2012     FR10089
0000005         01/09/2012     FR13019
0000003         01/09/2012     AB15006
0000010         01/11/2012     FR10019
0000012         01/11/2012     FR13449
0000010         01/11/2012     FR13319
0000011         01/11/2012     AB14215

Report output:

Month Name  Count of AB-Types  Count of FR-Types
July        2                  1
Aug         1                  2
Sept        1                  1
Nov         1                  3

The report should be in order of month; I will filter it by year (2012/2011). Assume there are only AB and FR type documents, with different numbering at the end (so use of the LIKE operator is required). There will be no zero-count months. Thank you.
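This shape of report is usually produced with conditional aggregation rather than a pivot. A sketch; the table name documents is a guess, and the column names follow the sample data:

```sql
-- One row per month, counting each document family with CASE + LIKE
SELECT DATENAME(month, selection_date) AS [Month Name],
       SUM(CASE WHEN document_type LIKE 'AB%' THEN 1 ELSE 0 END) AS [Count of AB-Types],
       SUM(CASE WHEN document_type LIKE 'FR%' THEN 1 ELSE 0 END) AS [Count of FR-Types]
FROM documents
WHERE YEAR(selection_date) = 2012
GROUP BY DATENAME(month, selection_date), MONTH(selection_date)
ORDER BY MONTH(selection_date);
```

Months with no rows simply do not appear, which matches the "no zero-count months" requirement.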

Using an ORDER BY clause with Multiple IF ELSE statments

Posted: 15 Mar 2013 05:01 AM PDT

Hello all, I've created a (probably over-complex) query that will eventually allow a user to select a value from 3 different variables: @Status, @AcctName, @RptName. Since I'm allowing the end user to multi-select different values for the variables, my query becomes a bit more complex. For instance:

declare @status varchar(150)
select @status = 'ERROR,REVIEW,COMPLETE,DELIVERY PREPARED,ALERT'

IF (@status = 'ALL' AND @AcctName <> 'ALL' AND @RptName <> 'ALL')
BEGIN
    select * from vw_document_main
    WHERE ','+REPLACE(@AcctName,'','')+',' LIKE '%,'+acct_name+',%'
    AND ','+REPLACE(@RptName,'','')+',' LIKE '%,'+rpt_title+',%'
END
ELSE
BEGIN
.......

Since I have to allow for customization, I have 9 different IF ELSE statements. What I need help with is creating an ORDER BY variable for the entire IF ELSE statement, i.e. when @ColName = 'system_Status' the entire IF ELSE statement would order by system status. Any idea? I began writing a query:

ORDER BY CASE @ColName
    WHEN 'system_Status' THEN system_status
    WHEN 'document_status_code' THEN document_status_code

However, I'm not sure where this ORDER BY clause would go in my IF ELSE statement; would it need to be after each individual SELECT statement? Any clarification would help, thanks!
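On the placement question: each SELECT is a separate statement, so the ORDER BY has to appear on each SELECT inside the IF/ELSE branches; a single trailing ORDER BY cannot span them. A sketch using the column names from the post (note that all branches of one CASE must return a comparable datatype, otherwise split them into separate CASE expressions):

```sql
SELECT *
FROM vw_document_main
WHERE ',' + @AcctName + ',' LIKE '%,' + acct_name + ',%'
ORDER BY CASE @ColName
             WHEN 'system_Status'        THEN system_status
             WHEN 'document_status_code' THEN document_status_code
         END;
```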

Entry Level DBA position - SQL Server experience needed

Posted: 15 Mar 2013 12:14 AM PDT

Hello, I have a job opportunity for an entry-level DBA located in Malvern, PA. They are looking for someone with an IT-related degree as well as some SQL Server and DBA experience. If you are interested, please contact me at dana.murk@insightglobal.net. Thank you! -Dana

SSIS - Data Flow Task With Delete Very Slow

Posted: 14 Mar 2013 08:26 PM PDT

I have a Data Flow Task that reads from a Flat File Source (orders.csv) and then uses an OLE DB Command to delete from a SQL database table if a match is found. This is the SQL command in the OLE DB Command task:

Delete orders from orders with(index(IX_orderno))
WHERE orderno = ? and ordersuf = ? and oetype in ('r','t','d') and cono = ?

(The index IX_orderno is defined as non-clustered and is orderno + ordersuf + oetype + cono.)

There are 400,000 records in orders.csv and the delete is extremely slow (over 1 hour). We have 8 indexes on this table, all non-clustered. Would this have any ramifications on the deletion speed? Help would be greatly appreciated.
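Worth noting: the OLE DB Command transform executes once per input row, so this runs 400,000 singleton deletes, which usually dominates the cost long before the indexes do. A common alternative (a sketch, not the poster's package; the staging table name is made up) is to bulk-load the file into a staging table in the Data Flow and then run a single set-based delete in an Execute SQL Task:

```sql
-- After the Data Flow loads orders.csv into dbo.orders_staging:
DELETE o
FROM orders o
JOIN orders_staging s
  ON  o.orderno  = s.orderno
  AND o.ordersuf = s.ordersuf
  AND o.cono     = s.cono
WHERE o.oetype IN ('r', 't', 'd');
```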

[Articles] Inconsistency

[Articles] Inconsistency


Inconsistency

Posted: 14 Mar 2013 11:00 PM PDT

This Friday Steve Jones notes that changing and altering your opinions is good and invites you to share things you might have learned that changed your mind in the past.

[MS SQL Server] The distribution agent failed to create temporary files

[MS SQL Server] The distribution agent failed to create temporary files


The distribution agent failed to create temporary files

Posted: 26 Sep 2012 03:42 AM PDT

Hi, I'm getting this error with a transactional replication setup on SQL 2008 R2 Standard (SP1):

The distribution agent failed to create temporary files in 'C:\Program Files\Microsoft SQL Server\100\COM' directory. System returned errorcode 5. (Source: MSSQL_REPL, Error number: MSSQL_REPL21100)

I've been to http://support.microsoft.com/kb/956032/en-us amongst other places, and as a result given Full Control to Everyone on the C:\Program Files\Microsoft SQL Server\100\COM folder (I tried the SQL Agent and Distribution Agent accounts first). I've also disabled on-access scanning for the anti-virus software. It's a 2-node cluster, so I've done this for both nodes. Is there anything else I can try in order to troubleshoot further? This one has got me completely stumped. Thanks everyone, Duncan

Transact-SQL job step to run under proxy

Posted: 14 Mar 2013 10:35 AM PDT

Hello, I have a Transact-SQL job step that I want to run under a proxy; however, I cannot get it to work. I created the credential, then created the proxy to run under all subsystems, but when I try to choose the proxy in the 'Run as' drop-down it does not appear. I am running SQL Server 2008 Enterprise Edition.

[SQL 2012] how to create user that can login to create and edit but cannot delete?

[SQL 2012] how to create user that can login to create and edit but cannot delete?


how to create user that can login to create and edit but cannot delete?

Posted: 14 Mar 2013 12:45 PM PDT

Circumstances require a user that can be restricted to log in to a specific database and create and edit data entered into tables, but cannot delete anything. I presume this type of security has been around a while, but I could not find anything searching around, so I ask: is it possible, and if so, how? I may also need some help with the T-SQL, because the database is being generated by a script that was generated by a tool, and I will have to modify that generated SQL script to create the user and login.
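At the permission level this maps onto GRANT plus an explicit DENY. A minimal sketch; the login name, password, database and schema are all placeholders:

```sql
-- Create a login and a matching user in the target database only
CREATE LOGIN app_editor WITH PASSWORD = 'StrongP@ssw0rd!';
GO
USE TargetDb;
CREATE USER app_editor FOR LOGIN app_editor;

-- Allow reading, inserting and updating rows, but explicitly block deletes
GRANT SELECT, INSERT, UPDATE ON SCHEMA::dbo TO app_editor;
DENY DELETE ON SCHEMA::dbo TO app_editor;
```

DENY wins over any GRANT the user might pick up through role membership, which is why it is used here rather than simply omitting DELETE from the GRANT.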

Can a database be part of an AlwaysOn Availability Group and also mirrored to another server?

Posted: 14 Mar 2013 05:39 AM PDT

Hi, if a database is already part of an AlwaysOn Availability Group, can it also be mirrored independently to another target server? In other words, can we have mirroring and AlwaysOn Availability Groups applied to the same database?

[T-SQL] SQL Queries for getting common values and uncommon values

[T-SQL] SQL Queries for getting common values and uncommon values


SQL Queries for getting common values and uncommon values

Posted: 15 Mar 2013 12:23 AM PDT

Hi, I have one table in my database, say "TableA", and the data is shown below:

ID Value
1  A   (common for all ids)
1  B   (common for all ids)
2  A   (common for all ids)
2  C
2  B   (common for all ids)
3  A   (common for all ids)
3  B   (common for all ids)
3  C
3  D

My problem is I need two SQL queries which will give the following two result sets. The first query's result will be:

Common factors in ID (1,2,3)
A
B

The second query's result will be:

Uncommon factors in ID (1,2,3)
C   (C is not in id 1, so it is listed here)
D

Can anyone please give me the two queries? I just need the common factors in one set and the uncommon factors in another set. Thanks and regards, Sankar
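Both result sets fall out of grouping by Value and comparing each value's distinct-ID count to the total number of IDs; a sketch against the posted TableA:

```sql
-- Values present for every ID ("common factors")
SELECT Value
FROM TableA
GROUP BY Value
HAVING COUNT(DISTINCT ID) = (SELECT COUNT(DISTINCT ID) FROM TableA);

-- Values missing from at least one ID ("uncommon factors")
SELECT Value
FROM TableA
GROUP BY Value
HAVING COUNT(DISTINCT ID) < (SELECT COUNT(DISTINCT ID) FROM TableA);
```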

Inner join vs Left join

Posted: 14 Mar 2013 07:59 PM PDT

Hi everyone, I'm experiencing something I don't expect and I'm hoping someone can shed some light. We are using Entity Framework and LINQ to query our SQL Server 2008 R2 database, and it is generating the following (fairly ugly) SQL:

[code="sql"]
SELECT [Extent1].[ValuationDateID] AS [ValuationDateID],
       [Extent3].[TransactionRef] AS [TransactionRef],
       [Extent3].[Counterparty] AS [Counterparty],
       [Extent2].[BookName] AS [BookName],
       [Extent2].[Category] AS [Category],
       [Extent2].[Type] AS [Type],
       [Extent2].[LifeLicence] AS [LifeLicence],
       [Extent2].[Manager] AS [Manager],
       [Extent2].[TaxFund] AS [TaxFund],
       [Extent2].[HiPortName] AS [HiPortName],
       [Extent2].[AliasPortfolioName] AS [AliasPortfolioName],
       [Extent4].[ParallelShift] AS [ParallelShift],
       [Extent5].[Tenor] AS [Tenor],
       [Extent1].[Value] AS [Value]
FROM (SELECT [fctSwapDualShift].[ValuationDateID] AS [ValuationDateID],
             [fctSwapDualShift].[Currency] AS [Currency],
             [fctSwapDualShift].[BookID] AS [BookID],
             [fctSwapDualShift].[HiPortNameID] AS [HiPortNameID],
             [fctSwapDualShift].[ContractTypeID] AS [ContractTypeID],
             [fctSwapDualShift].[TypeID] AS [TypeID],
             [fctSwapDualShift].[ManagerID] AS [ManagerID],
             [fctSwapDualShift].[TransactionID] AS [TransactionID],
             [fctSwapDualShift].[ParallelShiftID] AS [ParallelShiftID],
             [fctSwapDualShift].[SwapTenorID] AS [SwapTenorID],
             [fctSwapDualShift].[Value] AS [Value],
             [fctSwapDualShift].[ClientID] AS [ClientID]
      FROM [Reporting].[fctSwapDualShift] AS [fctSwapDualShift]) AS [Extent1]
INNER JOIN (SELECT [vwAllMappingWithDescriptions].[BookID] AS [BookID],
                   [vwAllMappingWithDescriptions].[ValuationDateID] AS [ValuationDateID],
                   [vwAllMappingWithDescriptions].[BookName] AS [BookName],
                   [vwAllMappingWithDescriptions].[Category] AS [Category],
                   [vwAllMappingWithDescriptions].[LifeLicence] AS [LifeLicence],
                   [vwAllMappingWithDescriptions].[TaxFund] AS [TaxFund],
                   [vwAllMappingWithDescriptions].[Type] AS [Type],
                   [vwAllMappingWithDescriptions].[HiPortName] AS [HiPortName],
                   [vwAllMappingWithDescriptions].[AliasPortfolioName] AS [AliasPortfolioName],
                   [vwAllMappingWithDescriptions].[Manager] AS [Manager],
                   [vwAllMappingWithDescriptions].[RasHiPortCode] AS [RasHiPortCode],
                   [vwAllMappingWithDescriptions].[LiquidityRiskClassification] AS [LiquidityRiskClassification],
                   [vwAllMappingWithDescriptions].[IncludeForRisk] AS [IncludeForRisk]
            FROM [configuration].[vwAllMappingWithDescriptions] AS [vwAllMappingWithDescriptions]) AS [Extent2]
        ON [Extent1].[BookID] = [Extent2].[BookID]
INNER JOIN [Reporting].[dimTransaction] AS [Extent3] ON [Extent1].[TransactionID] = [Extent3].[ID]
INNER JOIN [Reporting].[dimParallelShift] AS [Extent4] ON [Extent1].[ParallelShiftID] = [Extent4].[ID]
INNER JOIN [Reporting].[dimSwapTenor] AS [Extent5] ON [Extent1].[SwapTenorID] = [Extent5].[ID]
WHERE (20130314 = [Extent1].[ValuationDateID]) AND (20130314 = [Extent2].[ValuationDateID])
[/code]

Trying to run this query yields:

Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.

However, if I change the INNER JOIN on Reporting.dimParallelShift to a LEFT JOIN, the query returns the correct results. Here's the part I don't understand: the ParallelShiftID column on the Reporting.fctSwapDualShift fact table is NOT nullable. Furthermore, the dimParallelShift dimension has a primary key on the ID column, so it is unique. To my mind, in this case the LEFT join and INNER join are logically equivalent, but that obviously isn't the case. What am I missing?

Generate Script

Posted: 14 Mar 2013 06:58 PM PDT

Hi all, is there any way I can use the below script to generate CREATE DATABASE commands?

[code="sql"]
select DB_NAME(smf.database_id) DatabaseName, type_desc, smf.name, physical_name, size, sd.collation_name
from sys.master_files smf
inner join sys.databases sd on db_name(smf.database_id) = sd.name
where db_name(smf.database_id) not in ('master','model','tempdb','msdb')
order by name
[/code]

Thanks

DateDiff help to verify my code is correct or not?

Posted: 14 Mar 2013 07:44 AM PDT

Hi friends, I'm using the DateDiff function. My requirements are:

1) "SmartLaborII Job Posting End Date" is NOT < "SmartLaborII Work Order End Date". My code is:

not(datediff(day,[WorkForce_JobPosting].[Job Posting End Date_JP],[WorkForce_WorkOrder].[Work Order End Date_WO]) > 0)

2) "SmartLaborII Work Order End Date" is NOT greater than "SmartLaborII Previous Work Order End Date". My code is:

datediff(day,[WorkForce_WorkOrder].[Work Order End Date_WO],[WorkForce_WorkOrder].[Previous Work Order End Date_WO]) >= 0

I need help from the experts to verify whether my code for the two requirements is correct. Kind regards, Dhananjay

Get missing records

Posted: 14 Mar 2013 10:36 AM PDT

Hi, I have a table #TransDetail:

CREATE TABLE #TransDetail
(
    ID INT IDENTITY(1,1) PRIMARY KEY,
    Number INT,
    TransDate SMALLDATETIME,
    TransTime INT,
    Site VARCHAR(100),
    Company VARCHAR(15)
)

INSERT INTO #TransDetail (Number, TransDate, TransTime, Site, Company)
SELECT 1,'12/1/2012',0757,'MyCompany','COMP' UNION
SELECT 2,'12/1/2012',0757,'MyCompany','COMP' UNION
SELECT 3,'12/1/2012',0757,'MyCompany','COMP' UNION
SELECT 5,'12/1/2012',0757,'MyCompany','COMP' UNION
SELECT 1,'12/2/2012',0957,'MyNewCompany','COMPUSA' UNION
SELECT 2,'12/2/2012',0957,'MyNewCompany','COMPUSA' UNION
SELECT 5,'12/2/2012',0957,'MyNewCompany','COMPUSA'

I want to know the missing numbers for a particular TransDate, Site and Company. For example, for 'MyCompany', Number 4 is missing, so I want 4 returned along with the date and time of the previous imported number ('12/1/2012', 0757), the date and time of the next imported number ('12/1/2012', 0757), the Site ('MyCompany') and the Company ('COMP'). Similarly, for 'MyNewCompany', Numbers 3 and 4 are missing, so I want 3 and 4 returned along with the date and time of the previous imported number ('12/2/2012', 0957), the date and time of the next imported number ('12/2/2012', 0957), the Site ('MyNewCompany') and the Company ('COMPUSA'). Please help.

delete query SQL 2008 in Virtual machine setup?

Posted: 14 Mar 2013 11:19 PM PDT

Hi, I am going to delete the records for the last 2 years; the total number of rows is 31,935,523. Should I use the delete statements below? The column yyyymm is varchar and already has a non-clustered index; the date field is datetime with a non-unique non-clustered index.

[code="sql"]
select ep_no, date, yyyymm, schd_shft, dayin, dayout, rtype, stype, etype, stn_in, stn_out, updated_on, inserted_on
from CONTRACT_MUSTER
where yyyymm between '201001' and '201006'

delete from CONTRACT_MUSTER where yyyymm between '201001' and '201006'
go
delete from CONTRACT_MUSTER where yyyymm between '201006' and '201012'
go
[/code]

Thanks, Ananda
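With tens of millions of candidate rows, a single DELETE produces one very large transaction and heavy log growth. One common pattern (a sketch, sized arbitrarily at 50,000 rows per batch) is to delete in chunks:

```sql
-- Delete in chunks so each transaction, and therefore log growth, stays small
WHILE 1 = 1
BEGIN
    DELETE TOP (50000)
    FROM CONTRACT_MUSTER
    WHERE yyyymm BETWEEN '201001' AND '201012';

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to delete
END
```

In SIMPLE recovery the log space is reused between batches; in FULL recovery, schedule log backups while this runs.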

Populate sql table field descriptions programatically

Posted: 14 Mar 2013 11:49 PM PDT

I am trying to update my SQL table field descriptions programmatically, but I am getting an error.

use NCOS
GO
EXEC sp_updateextendedproperty
    @name = N'MS_Description', @value = 'Your description',
    @level0type = N'Schema', @level0name = 'NCOS',
    @level1type = N'Table',  @level1name = 'NCOS_Data',
    @level2type = N'Column', @level2name = 'NC_Prospect_Name';

Error:
Server: Msg 15600, Level 15, State 1, Procedure sp_updateextendedproperty, Line 42
An invalid parameter or option was specified for procedure 'sp_updateextendedproperty'.

What needs to be considered when adding a primary key to a table

Posted: 14 Mar 2013 07:41 AM PDT

Hi there, could you please help me out? I have a table with 9 columns and about a million rows, and I need to add a primary key. What things do I need to consider here? What if my table has no unique column or combination of columns (composite)? What can I do in this situation? Does adding an id field (just 1, 2, ... and so on) and then making this the PK make any sense? Could you please help me out here? Thanks a ton in advance, Dhani

Group and Count from a table

Posted: 14 Mar 2013 03:39 AM PDT

I have the following table and I need to group by each topic and count them.

SNo  Category
1    XML/HTML
2    ASP.NET
3    C#
4    ASP.NET/C#
5    C#/XML
6    HTML/ASP.NET
7    SQL
8    SQL/HTML
9    SQL/XML
10   XML
11   C#
12   ASP.NET
13   SQL
14   XML
15   SQL

Expected result:

Category  Count
C#        4
XML       5
HTML      3
ASP.NET   4
SQL       5
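Because each Category cell can hold several slash-delimited topics, a plain GROUP BY on the column won't work; each topic needs a delimiter-aware match. A sketch using a small inline topic list (the topic names come from the expected output; the table name MyTable is a placeholder):

```sql
-- Wrap both sides in '/' so a topic only matches as a whole delimited token
SELECT t.topic AS Category, COUNT(*) AS [Count]
FROM (VALUES ('C#'), ('XML'), ('HTML'), ('ASP.NET'), ('SQL')) AS t(topic)
JOIN MyTable m
  ON '/' + m.Category + '/' LIKE '%/' + t.topic + '/%'
GROUP BY t.topic;
```

For example, 'XML/HTML' becomes '/XML/HTML/', which matches both '%/XML/%' and '%/HTML/%' and so counts once for each topic.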

Hierarchy example - Ken Henderson's book - not working

Posted: 23 Feb 2013 10:16 AM PST

I'm working on the hierarchy lesson in Ken Henderson's The Guru's Guide to Transact-SQL. He wants to show two things: first, how to write a query that populates a temporary table (#org_chart) with all possible chartdepth levels detected in the staff table; secondly, how to write a query that shows all the boss/subordinate relationships regardless of the levels by which they are removed from each other. But I am having trouble with the first part, the query for building the #org_chart table. I've written it below, verbatim from the book. It is syntactically correct, but when I run it, it adds no subsequent rows to the #org_chart table. Will someone tell me what is missing? Run in this order, the DDL, query 1, and query 2 should produce a table with 40+ rows:

[code="plain"]
--DDL for creating and populating staff table
create table staff
(employee int primary key,
 employee_name varchar(10),
 supervisor int null references staff (employee))

insert staff values
(1, 'groucho', 1),
(2, 'chico', 1),
(3, 'harpo', 2),
(4, 'zeppo', 2),
(5, 'moe', 1),
(6, 'larry', 5),
(7, 'curly', 5),
(8, 'shemp', 5),
(9, 'joe', 8),
(10, 'curly joe', 9);

--query #1
--establishes first 10 rows of table, with first level of chart depth
select chartdepth=1, employee=o2.employee, supervisor=o1.employee
into #org_chart
from staff o1 inner join staff o2 on (o1.employee=o1.supervisor)

--query #2 does not work for me
--it is supposed to populate #org_chart with remaining levels of chartdepth.
--A while loop repeats the insert as many times as necessary to process all levels in staff table
while (@@rowcount > 0)
begin
    insert #org_chart (chartdepth, employee, supervisor)
    select distinct o1.chartdepth+1, o2.employee, o1.supervisor
    from #org_chart o1 join #org_chart o2 on (o1.employee=o2.supervisor)
    where o1.chartdepth=(select max(chartdepth) from #org_chart)
    and o1.supervisor<>o1.employee
end

--if following the execution of query #1 you run the below query about 4-5 times,
--it accomplishes what query #2 is supposed to accomplish with a While loop
insert into #org_chart
select distinct o1.chartdepth+1, o2.employee, o1.supervisor
from #org_chart o1 join #org_chart o2 on (o1.employee=o2.supervisor)
where o1.chartdepth=(select max(chartdepth) from #org_chart)
[/code]

Why is query #2 not working? Thanks.
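For comparison, on SQL Server 2005 and later the same expansion is usually written as a recursive CTE, which sidesteps the @@ROWCOUNT/loop mechanics entirely. A sketch against the same staff table (this is not the book's code, just an alternative formulation):

```sql
-- All boss/subordinate pairs at every remove, built recursively.
-- The root (groucho) supervises himself, so that self-reference is filtered out
-- to keep the recursion from looping forever.
WITH org_chart (chartdepth, employee, supervisor) AS
(
    SELECT 1, e.employee, e.supervisor       -- direct reports
    FROM staff e
    WHERE e.employee <> e.supervisor

    UNION ALL

    SELECT oc.chartdepth + 1, e.employee, oc.supervisor   -- one level further removed
    FROM staff e
    JOIN org_chart oc ON e.supervisor = oc.employee
    WHERE e.employee <> e.supervisor
)
SELECT chartdepth, employee, supervisor
FROM org_chart
ORDER BY supervisor, chartdepth, employee;
```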

[SQL Server 2008 issues] How to Pivot table?

[SQL Server 2008 issues] How to Pivot table?


How to Pivot table?

Posted: 14 Mar 2013 06:58 PM PDT

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[sales_report](
    [salesid] [bigint] NOT NULL,
    [productName] [varchar](100) NULL,
    [Qty] [bigint] NULL,
    [Date] [date] NULL,
    CONSTRAINT [PK_sales_report] PRIMARY KEY CLUSTERED ([salesid] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
INSERT [dbo].[sales_report] ([salesid], [productName], [Qty], [Date]) VALUES (1, N'dettol', 6, CAST(0xDD360B00 AS Date))
INSERT [dbo].[sales_report] ([salesid], [productName], [Qty], [Date]) VALUES (2, N'hamam', 10, CAST(0xDE360B00 AS Date))
INSERT [dbo].[sales_report] ([salesid], [productName], [Qty], [Date]) VALUES (3, N'sweet', 10, CAST(0xDF360B00 AS Date))
INSERT [dbo].[sales_report] ([salesid], [productName], [Qty], [Date]) VALUES (4, N'test', 23, CAST(0xDC360B00 AS Date))
INSERT [dbo].[sales_report] ([salesid], [productName], [Qty], [Date]) VALUES (5, N'boost', 10, CAST(0xDF360B00 AS Date))
INSERT [dbo].[sales_report] ([salesid], [productName], [Qty], [Date]) VALUES (6, N'horlicks', 2, CAST(0xDD360B00 AS Date))
INSERT [dbo].[sales_report] ([salesid], [productName], [Qty], [Date]) VALUES (7, N'shampo', 5, CAST(0xDE360B00 AS Date))

Here I want:

productName 14 15 16 17 Total
dettol      0  6  0  0  6
hamam       0  0  10 0  10
boost       0  0  0  10 10
horlicks    0  2  0  0  2
shampo      0  0  5  0  5
sweet       0  0  0  10 10
test        23 0  0  0  23
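The requested crosstab can be produced with PIVOT on the day number. A sketch against the posted table; the 14-17 columns follow the poster's expected output, so they are hard-coded here (a fully dynamic column list would need dynamic SQL):

```sql
-- Pivot quantity by day of month, with a row total
SELECT productName,
       ISNULL([14],0) AS [14], ISNULL([15],0) AS [15],
       ISNULL([16],0) AS [16], ISNULL([17],0) AS [17],
       ISNULL([14],0) + ISNULL([15],0) + ISNULL([16],0) + ISNULL([17],0) AS Total
FROM (SELECT productName, DAY([Date]) AS d, Qty
      FROM dbo.sales_report) src
PIVOT (SUM(Qty) FOR d IN ([14],[15],[16],[17])) p;
```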

date format 7/31/2013

Posted: 04 Mar 2013 03:52 AM PST

Hi friends, I'm new to T-SQL. I have a requirement to display a date as 7/31/2013, but currently I'm getting it from the DB as 2013-03-01 18:40:24.000. Any help on this will be highly appreciated. Thanks, Dhananjay
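CONVERT style 101 gets close (07/31/2013, with leading zeros); to match 7/31/2013 exactly, assemble the string from the date parts. A sketch:

```sql
DECLARE @d datetime = '2013-07-31 18:40:24.000';

-- Style 101 keeps leading zeros: 07/31/2013
SELECT CONVERT(varchar(10), @d, 101);

-- Assemble m/d/yyyy manually to drop them: 7/31/2013
SELECT CAST(MONTH(@d) AS varchar(2)) + '/' +
       CAST(DAY(@d)   AS varchar(2)) + '/' +
       CAST(YEAR(@d)  AS varchar(4));
```

Formatting is generally better done in the presentation layer; do this in SQL only when the consumer can't format dates itself.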

how to split the days between two months

Posted: 13 Mar 2013 11:19 PM PDT

Hi, I am developing a web application in .NET and I am stuck with a SQL Server query. If a person takes leave from 25/3/2013 to 3/02/2013, I want to split the days by month, e.g. 6 days in one month and 3 days in February. Can anyone please tell me how?

can we install 2008 R2 integration services and 2012 integration services on the same server

Posted: 14 Mar 2013 09:59 AM PDT

Hi DBAs, I have SQL Server 2012 Integration Services installed on a server. I am trying to connect to Oracle but it is failing. The same SSIS package works in 2008 R2, where I can connect to Oracle and import data. So I am planning to install SQL Server 2008 R2 SSIS on the same box where 2012 is installed. My database engine will be 2012. Can I do this? Please advise. Thanks a lot in advance.

table joins and OVER clause

Posted: 14 Mar 2013 05:25 PM PDT

Hi, please see the queries below.

tab1 - col1 int, col2 varchar2(30)
tab2 - col1 int, col2 varchar2(30), col3 datetime

Query 1:

select a.col1, a.col2, b.col3
from tab1 a
full outer join tab2 b
  on a.col1 = b.col1
 and b.col3 > '19000101'

Query 2:

select a.col1, a.col2, b.col3
from tab1 a
full outer join (select col1, col2, col3,
                        row_number() over(partition by col1, col3 order by col3) as rownumber
                 from tab2) b
  on a.col1 = b.col1
 and b.col3 > '19000101'

Issue: I am getting a row-count difference between these two queries. For example, executing the first query returns 3 rows, while executing the second query returns 5 rows. Can you please explain why?

Query to find cheapest supplier when multiple suppliers?

Posted: 14 Mar 2013 06:38 AM PDT

Hi, I have the following table (created from a view):

BaseSupplierID SupplierID ProductID Price
3              3          1         11
3              4          1         11
3              5          1         15
3              6          1         10
4              3          2         16
4              4          2         10
5              5          3         16
5              8          3         14
5              9          3         10

Basically, each product can be supplied by multiple suppliers, yet there is a base supplier (which in theory should be the cheapest). I need to create a query that would show me the products where the base supplier was not the cheapest for their products. Any pointers? Thanks
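One way to express this is to compare the base supplier's row with the per-product minimum price. A sketch; the view name SupplierPrices is a placeholder, the columns follow the posted data:

```sql
-- Products where some other supplier beats the base supplier's price
SELECT v.ProductID,
       v.Price AS BasePrice,
       m.MinPrice
FROM SupplierPrices v
JOIN (SELECT ProductID, MIN(Price) AS MinPrice
      FROM SupplierPrices
      GROUP BY ProductID) m
  ON m.ProductID = v.ProductID
WHERE v.SupplierID = v.BaseSupplierID   -- the base supplier's own row
  AND v.Price > m.MinPrice;
```

Against the sample data this would flag products 1 and 3 (base price 11 vs. min 10, and 16 vs. 10), but not product 2, where the base supplier already matches the minimum.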

DB restored from customer -> Database diagram vanishes

Posted: 14 Mar 2013 02:55 AM PDT

Hello, I use SS2008R2 Dev. Edition and altered a DB from my customer. Then I sent him my DB backup, and the customer synchronized the DB schema with SQLCompare. So far, so good; all works. Now the customer's production DB has been sent back to me and I restored it, but the database diagram I created has vanished. I suppose that has to do with user rights. On my PC I always log in as a user with sysadmin rights/Windows security. The customer uses Windows security AND some SQL user logins, and I think that's the issue. What's the right way to ensure that any DB backup I receive from my customer also returns the database diagram? Thanks

Slow Cascade Stored Procedure & Hang

Posted: 14 Mar 2013 06:11 AM PDT

Table image: http://tinypic.com/r/1075g6v/6

I have this query which works id by id and calculates the cost accordingly, but it is very slow, and I would like to understand how I could convert it into a set-based operation. Depending on our condition, we calculate the modeled costs differently. When a user updates a driver we can run an update on the entire driver column based on where it has changed. But when it comes to calculating the modeled cost, we do it row by row (as the fixed cost differs) and then divide across the months. I have pasted the code below. Is there a way to do this as a set-based operation? First we update the drivers in the same table depending on what value has changed, using an update, and then update the modeled cost row by row (which is really slow).

Code:

SELECT @rowCounter = 1, @totalrows = @@ROWCOUNT

WHILE @rowCounter <= @totalrows
BEGIN
    SELECT @currentId = tempId FROM @temp WHERE row = @rowCounter

    SELECT @newModeledCost = case
               when not exists (select 1 from dbo.DIMSTD_SCENARIO
                                where SCENARIO0_Name = SCENARIO and SCENARIO2_Name = 'Model')
               then ISNULL(DriverValue1,0)*ISNULL(DriverValue2,0)*ISNULL(UnitA,0)*ISNULL(UnitB,0)+ISNULL(FixedCost,0) -- normal allocation for all scenarios
               else (ISNULL(unita,0) * (ISNULL(DriverValue1,0)/ISNULL(NULLIF(DriverValue2,0),1)) * ISNULL(UnitB,0))+ISNULL(FixedCost,0) -- allocation for model scenarios
           end,
           @oldModeledCost = ISNULL(ModeledCost,0),
           @newOct = (ISNULL(@newModeledCost,0) * (ISNULL(Oct, 0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
           @newNov = (ISNULL(@newModeledCost,0) * (ISNULL(Nov, 0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
           @newDec = (ISNULL(@newModeledCost,0) * (ISNULL(Dec, 0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
           @newJan = (ISNULL(@newModeledCost,0) * (ISNULL(Jan, 0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
           @newFeb = (ISNULL(@newModeledCost,0) * (ISNULL(Feb, 0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
           @newMar = (ISNULL(@newModeledCost,0) * (ISNULL(Mar, 0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
           @newApr = (ISNULL(@newModeledCost,0) * (ISNULL(Apr, 0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
           @newMay = (ISNULL(@newModeledCost,0) * (ISNULL(May, 0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
           @newJun = (ISNULL(@newModeledCost,0) * (ISNULL(Jun, 0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
           @newJul = (ISNULL(@newModeledCost,0) * (ISNULL(Jul, 0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
           @newAug = (ISNULL(@newModeledCost,0) * (ISNULL(Aug, 0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
           @newSep = (ISNULL(@newModeledCost,0) * (ISNULL(Sep, 0) / ISNULL(NULLIF(@oldModeledCost,0),1)))
    FROM dbo.TBF_BUDGETExpenses
    WHERE BudgetId = @currentId

    UPDATE dbo.TBF_BUDGETExpenses
    SET ModeledCost = @newModeledCost,
        Oct = @newOct, Nov = @newNov, Dec = @newDec, Jan = @newJan,
        Feb = @newFeb, Mar = @newMar, Apr = @newApr, May = @newMay,
        Jun = @newJun, Jul = @newJul, Aug = @newAug, Sep = @newSep,
        Username = 'Cascade', lastmodified = getdate()
    WHERE BudgetId = @currentId AND @oldModeledCost <> 0

    Print 'Record Update ' + CAST(@currentId AS VARCHAR(15))
    SET @rowCounter = @rowCounter + 1
END
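As a rough illustration of the set-based direction (not a drop-in replacement; the scenario CASE and month columns come from the posted code, everything else is simplified and untested against the real schema), the per-row loop can collapse into a single UPDATE that computes the new cost inline once per row:

```sql
-- Sketch: compute NewCost per row via CROSS APPLY, then scale every month column.
-- @temp is the same id list the posted loop iterates over.
UPDATE e
SET ModeledCost = x.NewCost,
    Oct = ISNULL(x.NewCost,0) * (ISNULL(e.Oct,0) / ISNULL(NULLIF(e.ModeledCost,0),1)),
    Nov = ISNULL(x.NewCost,0) * (ISNULL(e.Nov,0) / ISNULL(NULLIF(e.ModeledCost,0),1)),
    -- ...same expression repeated for Dec through Sep...
    Username = 'Cascade',
    lastmodified = GETDATE()
FROM dbo.TBF_BUDGETExpenses e
JOIN @temp t ON t.tempId = e.BudgetId
CROSS APPLY (SELECT CASE
                      WHEN NOT EXISTS (SELECT 1 FROM dbo.DIMSTD_SCENARIO
                                       WHERE SCENARIO0_Name = e.SCENARIO
                                         AND SCENARIO2_Name = 'Model')
                      THEN ISNULL(e.DriverValue1,0) * ISNULL(e.DriverValue2,0)
                           * ISNULL(e.UnitA,0) * ISNULL(e.UnitB,0) + ISNULL(e.FixedCost,0)
                      ELSE ISNULL(e.UnitA,0)
                           * (ISNULL(e.DriverValue1,0) / ISNULL(NULLIF(e.DriverValue2,0),1))
                           * ISNULL(e.UnitB,0) + ISNULL(e.FixedCost,0)
                    END AS NewCost) x
WHERE ISNULL(e.ModeledCost,0) <> 0;
```

Because the old ModeledCost is still on the row while the UPDATE runs, the "new/old" ratio can be computed in place, which is what makes the single-statement form possible.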

Test connection successful with Import Export Wizard but fails in BIDS 2012 - Oracle to SQL

Posted: 14 Mar 2013 06:17 AM PDT

Hello all, the test connection is successful with the Import/Export Wizard, but it fails in an SSIS package in BIDS 2012. Can you please help me debug this issue? A screenshot of the error in SQL 2012 is attached: [img]http://www.sqlservercentral.com/Forums/Attachment13333.aspx[/img]

Distribution cleanup failed

Posted: 14 Mar 2013 07:05 AM PDT

What happens if the Distribution Cleanup maintenance job fails before the next replication cycle? Does it push duplicates over? If it succeeds after the next replication cycle, does it clean up everything that was missed during its last failure? Is this really anything to worry about? It is not a constant failure, but it fails every now and then.

Where is my execution plan?

Posted: 14 Mar 2013 02:27 AM PDT

I am trying to get the execution plan for a very slow procedure that I have, and I can't. I am able to list the child queries and IO utilization, but my DMV query can't display the graphical plan. Initially I thought it was memory pressure in my production environment, but this is also happening on my test box, which is totally isolated. Here's my DMV query, taken from the book "SQL Server DMVs in Action" (a wonderful book, by the way):

[code="sql"]
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

SELECT TOP 100
    [Total IO] = (qs.total_logical_reads + qs.total_logical_writes),
    [Average IO] = (qs.total_logical_reads + qs.total_logical_writes) / qs.execution_count,
    qs.execution_count,
    SUBSTRING(qt.text, (qs.statement_start_offset/2) + 1,
        ((CASE WHEN qs.statement_end_offset = -1
               THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
               ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2) + 1) AS [Individual Query],
    qt.text AS [Parent Query],
    DB_NAME(qt.dbid) AS DatabaseName,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE DB_NAME(qt.dbid) = 'db_name'
  AND qt.text LIKE '%sproc_name%'
ORDER BY [Total IO] DESC, qs.execution_count DESC
[/code]

The query_plan column displays NULL. The stored procedure uses temporary tables, not table variables, but I do not think that's relevant anyway.
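One documented cause of a NULL from sys.dm_exec_query_plan is a plan whose XML representation would exceed the 128-level nesting limit of the xml data type; sys.dm_exec_text_query_plan returns the plan as nvarchar(max) instead and avoids that limit. A sketch of the substitution (the `sproc_name` filter is the same placeholder as in the posted query):

```sql
-- Text variant of the plan DMF; the offsets narrow it to the slow statement.
SELECT qt.text AS [Parent Query],
       tqp.query_plan          -- nvarchar(max); save as a .sqlplan file to view graphically
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle,
                                        qs.statement_start_offset,
                                        qs.statement_end_offset) AS tqp
WHERE qt.text LIKE '%sproc_name%';
```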

Need Help with DateDiff function

Posted: 14 Mar 2013 07:43 AM PDT

Hi friends, I'm using the DATEDIFF function. My requirements are:

1) "SmartLaborII Job Posting End Date" is NOT < "SmartLaborII Work Order End Date", and my code is:

not(datediff(day, [WorkForce_JobPosting].[Job Posting End Date_JP], [WorkForce_WorkOrder].[Work Order End Date_WO]) > 0)

2) "SmartLaborII Work Order End Date" is NOT greater than "SmartLaborII Previous Work Order End Date", and my code is:

datediff(day, [WorkForce_WorkOrder].[Work Order End Date_WO], [WorkForce_WorkOrder].[Previous Work Order End Date_WO]) >= 0

I need help from the experts to verify whether my code for these two requirements is correct. Kind regards, Dhananjay
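Both requirements can also be written as direct date comparisons, which mirror the "is NOT <" wording literally. One caveat when comparing the two forms: DATEDIFF(day, ...) compares calendar days only, while a direct comparison also considers the time portion of the values.

```sql
-- Requirement 1: posting end date must not be earlier than the work order end date
WHERE NOT ([WorkForce_JobPosting].[Job Posting End Date_JP]
           < [WorkForce_WorkOrder].[Work Order End Date_WO])

-- Requirement 2: work order end date must not be later than the previous one
  AND NOT ([WorkForce_WorkOrder].[Work Order End Date_WO]
           > [WorkForce_WorkOrder].[Previous Work Order End Date_WO])
```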

adding a new not null bit column with default 0

Posted: 13 Mar 2013 11:32 PM PDT

I need to add a new bit column to a table with around 6 million rows. I know the table will be locked during this operation. My question is: is there a quick way of doing this without copying the data into a new table, re-applying the indexes etc., and then renaming the table? Thanks
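For reference, the straightforward form is a single ALTER TABLE (table and column names below are hypothetical). On SQL Server 2005/2008, adding a NOT NULL column with a default still writes the value into every existing row, but it does so in place, with no table copy or index rebuild; SQL Server 2012 Enterprise later made this a metadata-only change.

```sql
-- Adds the column and backfills 0 into existing rows in place.
ALTER TABLE dbo.BigTable              -- hypothetical table name
ADD IsArchived bit NOT NULL
    CONSTRAINT DF_BigTable_IsArchived DEFAULT (0);
```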

How to get the below T-SQL query output?

Posted: 08 Mar 2013 05:23 PM PST

Hi all, this is the query I have written:

[code="sql"]
DECLARE @FromDate DATETIME
DECLARE @EndDate DATETIME
SET @FromDate = '2013-01-01 00:00:00.000'
SET @EndDate = '2013-02-13 00:00:00.000'

SELECT year(sd.FKDAT) AS YEARWISE_DATA, sg.KUNNR AS PARTY, sg.NAME1 AS NAME,
       SUM(sd.FKIMG) AS QUANTITY, SUM(sd.NETWR) AS VALUE_IN_FC, SUM(sd.NTGEW) AS WEIGHT
FROM Sales_group sg WITH(NOLOCK)
INNER JOIN SALES_DATA sd WITH(NOLOCK) ON sg.KUNNR = sd.KUNAG
WHERE sd.FKDAT >= @FromDate AND sd.FKDAT <= @EndDate
GROUP BY sd.FKDAT, sg.KUNNR, sg.NAME1
ORDER BY 1, sg.KUNNR ASC
[/code]

[b]Below is the output I am getting:[/b]

2013 HA010 ADK 360.000 36988.20 9206.434
2013 HA010 ADK 205.000 31363.80 9299.848
2013 HA018 AGRI 295.000 42646.25 12578.149
2013 HA018 AGRI 119.000 29587.75 8816.112
2013 HA018 AGRI 21.000 10289.65 2882.488
2013 HA018 AGRI 249.000 57764.20 17605.415

[b]Required output:[/b]

2013 HA010 ADK 565.000 68352.00 18506.31
2013 HA018 AGRI 684.000 140287.85 41882.164

Thanks & regards, Bhushan
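The extra rows per year come from the GROUP BY clause: it groups by the full date sd.FKDAT while the SELECT list only shows year(sd.FKDAT), so every distinct billing date produces its own output row. Grouping by the year expression instead collapses them into one row per year/party:

```sql
SELECT year(sd.FKDAT) AS YEARWISE_DATA, sg.KUNNR AS PARTY, sg.NAME1 AS NAME,
       SUM(sd.FKIMG) AS QUANTITY, SUM(sd.NETWR) AS VALUE_IN_FC, SUM(sd.NTGEW) AS WEIGHT
FROM Sales_group sg
INNER JOIN SALES_DATA sd ON sg.KUNNR = sd.KUNAG
WHERE sd.FKDAT >= @FromDate AND sd.FKDAT <= @EndDate
GROUP BY year(sd.FKDAT), sg.KUNNR, sg.NAME1   -- group by the year, not the raw date
ORDER BY 1, sg.KUNNR ASC
```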

Capture Data Change

Posted: 14 Mar 2013 05:18 AM PDT

I'm trying to use CDC to audit certain tables; the problem I find is that it doesn't capture the user. I have tried adding a column with a SUSER_SNAME() default constraint and an extra column on the CDC table, but it puts 'sa' into the field each time. Is there another way to do this using CDC? Apart from changing the user table to capture this, I can't see another option.

Internal Query Processor Error

Posted: 14 Mar 2013 03:37 AM PDT

Any idea on this?

Msg 8624, Level 16, State 21, Line 1
Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services.

We are on SQL Server 2008 R2. Thanks

Import Access 2010 db to SQL Server

Posted: 14 Mar 2013 04:32 AM PDT

I am trying to import an .accdb file into SQL Server but do not see any such option in the SQL Server Import Wizard. There is an option for importing an .mdb file, but not an .accdb file. Does anyone know how to import such a file into SQL 2008? Thanks, KK

fetch tables most often queried

Posted: 26 Feb 2013 12:24 PM PST

Is there a way to determine which tables of a database are being queried most often, sorted in descending order?
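The index-usage DMV can approximate this: summing seeks, scans, and lookups per table gives a read count for each table since the last service restart (the counters reset when SQL Server restarts, so they only reflect activity since then).

```sql
-- Tables of the current database ordered by read activity since the last restart.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       SUM(s.user_seeks + s.user_scans + s.user_lookups) AS reads,
       SUM(s.user_updates) AS writes
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
  AND OBJECT_NAME(s.object_id) IS NOT NULL
GROUP BY OBJECT_NAME(s.object_id)
ORDER BY reads DESC;
```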

encryptbypassphrase and encrypting by using symmetric key

Posted: 14 Mar 2013 12:45 AM PDT

I wanted to know some details about SQL Server encryption:

a) What is the exact difference between ENCRYPTBYPASSPHRASE and encrypting with a symmetric key that is protected by a password (i.e. when creating the symmetric key it is encrypted by a password, not by the database master key or a certificate)?

b) Which of the two is more secure?

c) If I use the second method from a), encrypting with a symmetric key protected by a password, and after about a year I want to change the password, how can that be done easily?
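For comparison, here are the two patterns side by side, plus the password rotation from question c) (key name and passwords below are made up). Rotating the password re-encrypts only the key itself, not the data encrypted with it, which is the main operational advantage of method b) over ENCRYPTBYPASSPHRASE, where changing the passphrase means re-encrypting every value.

```sql
-- a) ENCRYPTBYPASSPHRASE: no key object, passphrase supplied on every call
SELECT ENCRYPTBYPASSPHRASE('my passphrase', N'secret data');

-- b) Symmetric key protected by a password: a reusable server-side key object
CREATE SYMMETRIC KEY SK_Demo                    -- hypothetical key name
    WITH ALGORITHM = AES_256
    ENCRYPTION BY PASSWORD = 'Old$trongPassword1';

OPEN SYMMETRIC KEY SK_Demo
    DECRYPTION BY PASSWORD = 'Old$trongPassword1';

SELECT ENCRYPTBYKEY(KEY_GUID('SK_Demo'), N'secret data');

-- c) Change the protecting password (the key must be open in this session)
ALTER SYMMETRIC KEY SK_Demo
    ADD ENCRYPTION BY PASSWORD = 'New$trongPassword2';
ALTER SYMMETRIC KEY SK_Demo
    DROP ENCRYPTION BY PASSWORD = 'Old$trongPassword1';

CLOSE SYMMETRIC KEY SK_Demo;
```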

Websockets?

Posted: 13 Mar 2013 11:54 PM PDT

Does anyone have experience with SQL Server and WebSockets? I need my stored procedure to be able to send a message to clients through Node.js. Thanks,

SQL Server: uninstall previous version & install Express edition

Posted: 14 Mar 2013 04:06 AM PDT

Hi all, my 2008 R2 eval copy has expired. I want to uninstall it and install the Express version. I've already uninstalled some SQL Server 2008 programs and deleted some directories. When I try to uninstall the "Microsoft SQL Server 2008 R2 Setup (English)" program, this is what I get:

Warning 26003: SQL Server 2008 R2 Setup Support Files cannot be uninstalled because the following products are installed:
SQL Server 2008 R2 Common Files
SQL Server 2008 R2 Database Engine Shared

After clicking OK, I get this:

"The feature you are trying to use is on a network resource that is unavailable. Click OK to try again, or enter an alternate path to a folder containing the installation package 'sqlsupport.msi' in the box below (c:\97c8055e113829e172976f0a9df07e57\1033_ENU_LP\x86\setup\sqlsupport_msi\)"

The other problem is that I can't install the 2008 R2 Express version: it goes through a few things and then just sits there. How hosed am I? Any help would be appreciated.

SQL Server for Content Censorship

Posted: 05 Mar 2013 06:08 PM PST

Hi champs, we have a slightly strange requirement for SharePoint, where the backend is SQL Server. Users of SharePoint's social features should not be able to use offensive words in comments, newsfeeds, conversations, replies, etc.; the system must not allow "offensive words". Somehow this is not possible in SharePoint automatically: we can only do it manually through the content approval process, and that approach is not practical for a larger environment. I was wondering whether something can be done on the SQL side, since SharePoint stores its data in SQL Server. I was thinking of runtime detection of offensive words against a master table (an offensive-word dictionary), or a check when the user posts (saves), so SQL can compare and, if it finds offensive data, trigger an alert for the admin. Please let me know if this approach is somehow possible. Thanks. Regards, Inder.
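One important caveat first: Microsoft does not support direct modification of SharePoint content databases (including adding triggers to them), so in practice this check belongs in an application-layer event receiver. Purely as a sketch of the general SQL technique against a plain application table, with entirely hypothetical table and column names, a trigger comparing new rows against a dictionary table might look like:

```sql
-- Hypothetical tables: dbo.Comments(CommentText) and dbo.OffensiveWords(Word).
CREATE TRIGGER trg_BlockOffensiveWords
ON dbo.Comments
AFTER INSERT, UPDATE
AS
BEGIN
    -- Reject the whole statement if any new row contains a dictionary word.
    IF EXISTS (SELECT 1
               FROM inserted AS i
               JOIN dbo.OffensiveWords AS w
                 ON i.CommentText LIKE '%' + w.Word + '%')
    BEGIN
        RAISERROR('Comment contains a blocked word.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;
```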

Upgrading SQL Server

Posted: 14 Mar 2013 03:41 AM PDT

Hi DBAs, we are upgrading a SQL Server 2005 box to 2012 (side by side). For planning the new server's hardware requirements, we want some useful information from the current 2005 box. Can you please let me know what information we should gather from the current box (like activity and pressure) while building the new one? Please let me know if you have a query which brings out all this useful information. Thanks a lot in advance!

a transaction is being rolled back for several days

Posted: 13 Mar 2013 08:55 PM PDT

The situation is as follows:

1. Server: SQL 2008 R2 (10.50.1600)
2. Database compatibility level: 2000 (80)

A job was stopped and the process was killed. The transaction has now been rolling back for several days. The amount of processing done by the job does not justify a rollback of more than an hour. The table in which the data is being rolled back is expendable and can be dropped or truncated. Killing the SPID of the rolled-back transaction displays this message:

SPID 173: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 0 seconds.

How can I drop the table?
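While the rollback holds its locks, the table cannot be dropped or truncated; the DROP would simply block behind SPID 173. What can be done is to watch whether the rollback is actually making progress:

```sql
-- Re-issue KILL with STATUSONLY for the built-in estimate...
KILL 173 WITH STATUSONLY;

-- ...or query the request directly for live progress figures.
SELECT session_id, command, status, percent_complete,
       estimated_completion_time / 60000.0 AS est_minutes_remaining
FROM sys.dm_exec_requests
WHERE session_id = 173;
```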

Encryption doubts

Posted: 11 Mar 2013 07:21 PM PDT

I want to encrypt my database, so which method is the simplest and best? If I use a symmetric key, should I encrypt it with the database master key or with a password? Which is safer?

SQL migration - Database 90 to 100

Posted: 13 Mar 2013 10:05 PM PDT

After restoring a 2005 database on SQL 2008, do we have a list of the impacts when we change the compatibility level from 90 to 100?
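A restored 2005 database keeps compatibility level 90 until it is changed explicitly; the switch itself is one statement (database name below is hypothetical). The behavior differences between levels 90 and 100 are listed in the "ALTER DATABASE Compatibility Level" topic in Books Online.

```sql
-- Raise the restored database from level 90 (SQL 2005) to 100 (SQL 2008).
ALTER DATABASE MyRestoredDb SET COMPATIBILITY_LEVEL = 100;  -- hypothetical name
```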

Clearing/deleting/resetting asynchronous file target - Extended events

Posted: 11 Mar 2013 11:55 PM PDT

Does anyone have any suggestions on how to clear/delete/reset the asynchronous file target files for extended events?
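As far as I know there is no command that truncates a file target in place; the usual approach is to stop the session, drop the target, delete the .xel/.xem files from disk, then re-add the target and restart. A sketch with a hypothetical session name and path (in SQL Server 2008 the target is named package0.asynchronous_file_target):

```sql
ALTER EVENT SESSION MySession ON SERVER STATE = STOP;

ALTER EVENT SESSION MySession ON SERVER
    DROP TARGET package0.asynchronous_file_target;

-- delete the old C:\XE\MySession*.xel / *.xem files from disk here

ALTER EVENT SESSION MySession ON SERVER
    ADD TARGET package0.asynchronous_file_target
        (SET filename = N'C:\XE\MySession.xel',
             metadatafile = N'C:\XE\MySession.xem');

ALTER EVENT SESSION MySession ON SERVER STATE = START;
```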
