Friday, July 12, 2013

[how to] Altering the location of Oracle-Suggested Backup

Altering the location of Oracle-Suggested Backup

Posted: 12 Jul 2013 08:29 PM PDT

On one database, the Oracle-Suggested Backup scheduled from Enterprise Manager always ends up in the recovery area, despite RMAN configuration showing that device type disk format points elsewhere.

As far as I can see, the scheduled backup job is simply:

run {
  allocate channel oem_disk_backup device type disk;
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}

Asking RMAN to show all reveals that device type disk is indeed configured to store elsewhere:

CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/s01/backup/PROD11/PROD11_%U';

If I run the script manually, the backupset is placed at the above location. When the script is run from the job scheduler, however, the backupset goes to the RECO disk group on ASM.

Why might Oracle still choose to dump the backupset to the db_recovery_file_dest?

Ultimately, how can I change the backup destination?
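One workaround, as a minimal sketch: a FORMAT clause on the channel itself overrides the configured default (and the recovery area), so it could be added directly to the scheduled run block, reusing the destination from the question:

    run {
      allocate channel oem_disk_backup device type disk
        format '/s01/backup/PROD11/PROD11_%U';
      recover copy of database with tag 'ORA_OEM_LEVEL_0';
      backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
    }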

How do I safely import MySQL data in a directory to a running MariaDB server?

Posted: 12 Jul 2013 08:15 PM PDT

We're integrating another office into ours. That includes some of their data.

I have a newly installed MariaDB 5.5 server. The data dir is /home/data/MariaDB. I've done a new/fresh install into that directory:

ls -1 /home/data/MariaDB
    ibdata1
    ib_logfile0
    ib_logfile1
    multi-master.info
    mysql/
    mysql_upgrade_info
    performance_schema/
    test/

The server is up & running.

I inherited some data from a now-defunct MySQL server. It's in:

ls -1 /home/data/MySQL-OLD
    auto.cnf
    ibdata1
    ib_logfile0
    ib_logfile1
    mysql/
    mysql_upgrade_info
    performance_schema/
    proj_DB_2136/
    proj_DB_4216/
    proj_DB_6217/
    test/

I want to move data for three DBs -- "proj_DB_2136/", "proj_DB_4216/", and "proj_DB_6217/" -- into my working MariaDB server. Same names, same data, everything.

I just need to do this one time, so I can start working on the old data on my new server.

I'd like to do it the right way, without losing anything!

mysqldump makes sense to me, but it only works for databases already IN a running server.

What's the correct way to get this data safely brought over?
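One common approach, as a sketch (it assumes the old datadir is internally consistent): start a throwaway mysqld on the old directory under a different port and socket, then pipe mysqldump from it into the running MariaDB server. Port, socket, and credentials below are placeholders:

    # temporary instance on the old data
    mysqld --datadir=/home/data/MySQL-OLD --port=3307 --socket=/tmp/mysql_old.sock &

    # dump the three databases from it and load them into the new server
    mysqldump --socket=/tmp/mysql_old.sock -u root -p \
        --databases proj_DB_2136 proj_DB_4216 proj_DB_6217 | mysql -u root -p

Dumping and reloading (rather than copying directories) sidesteps the InnoDB system tablespace, which cannot simply be mixed between two datadirs.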

MySQL Replication - Used to work, now get error 2026

Posted: 12 Jul 2013 07:00 PM PDT

I've had replication running now for about 2 years, with 1 master and 2 slaves, both connecting over SSL. I'm using MySQL 5.5.15 on the master (CentOS), and MySQL 5.14.46 on the slaves (on a Windows machine - don't ask).

Last week both slaves stopped replicating off the master, giving me an error code of 2026 and saying that they can't connect to the master. Looking in the error logs, both stopped being able to connect to the server at the same time - 16:46 in the afternoon.

I'm the only system admin, and I wasn't fiddling with anything on the server; in fact, it had been running smoothly for a long time. The SSL certificates appear to still be set up correctly on the master.

Has anybody else had a problem like this?

Creating a database redundancy based on Mysql

Posted: 12 Jul 2013 06:13 PM PDT

I have this requirement where my web application, running on Tomcat, sits behind a load balancer, and I am planning to deploy this application on multiple servers.

This webapp needs a database (MySQL) for managing profiles and other similar stuff. Now, I can only have a single master database, and I want all my front-end servers to talk to it. Obviously, if I do this it becomes my single point of failure, hence the need for redundancy/dynamic failover.

Can you please guide me on this? My requirement becomes a little more complicated in a way I couldn't find covered in available articles: how to set up a connection to this cluster. Below is how server.xml is configured in my environment when it talks to a single DB.


    <Resource name="jdbc/master" auth="Container" type="javax.sql.DataSource"
              maxActive="10" maxIdle="10" validationQuery="SELECT 1"
              testOnBorrow="false" testWhileIdle="true" validationQueryTimeout="5"
              timeBetweenEvictionRunsMillis="10000" minEvictableIdleTimeMillis="60000"
              maxWait="10000" username="XXX" password="XXXXXX"
              driverClassName="com.mysql.jdbc.Driver" removeAbandoned="true"
              removeAbandonedTimeout="60" logAbandoned="true"
              url="jdbc:mysql://localhost:53306/master?autoReconnect=true" />

Now, the only thing I can replace is the URL, but how is the question -- and what should localhost point to?

I'd really appreciate any response/suggestions here. Please feel free to ask for any information you feel I haven't supplied; suggesting an alternative approach to my problem is equally welcome, but please note I can't change the webapp, as it is a 3rd-party application.
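For what it's worth, MySQL Connector/J accepts a comma-separated host list in the URL for client-side failover, so one option (a sketch; hostnames are placeholders, and inside server.xml the ampersand must be escaped as &amp;amp;) is to point the same Resource at a primary and a failover replica:

    url="jdbc:mysql://db-primary:3306,db-failover:3306/master?autoReconnect=true&amp;failOverReadOnly=false"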

Database update rather than cycle through?

Posted: 12 Jul 2013 06:32 PM PDT

I'm new when it comes to MySQLi and databases so I'm not sure how exactly to title this question.

I have a PHP script set up to insert data into the database. Pretty self-explanatory. However, I noticed that the database runs through each row instead of just inserting the data -- possibly to check for duplicates? Wouldn't it be much faster not to have it query through each row and just insert the data? The data I'm inserting will never be duplicated anyway unless I force it.

How can I just get it to insert the data without it running through each value in the row? My apologies if this is off-topic.

$SQL = "INSERT INTO `read`(`title`,`url`) VALUES\n ".implode( "\n,",array_reverse( $sql_values ) );  

How do I get the aggregate of a window function in Postgres?

Posted: 12 Jul 2013 08:59 PM PDT

I have a table containing two columns of permutations/combinations of integer arrays, and a third column containing a value, like so:

CREATE TABLE foo
(
  perm integer[] NOT NULL,
  combo integer[] NOT NULL,
  value numeric NOT NULL DEFAULT 0
);
INSERT INTO foo
VALUES
( '{3,1,2}', '{1,2,3}', '1.1400' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '1.2680' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '1.2680' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '1.2680' ),
( '{3,1,2}', '{1,2,3}', '0.9280' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '1.2680' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '1.2680' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,2,1}', '{1,2,3}', '0' ),
( '{3,2,1}', '{1,2,3}', '0.8000' )

I want to find out the average and standard deviation for each permutation, as well as for each combination. I can do that with this query:

SELECT
  f1.perm,
  f2.combo,
  f1.perm_average_value,
  f2.combo_average_value,
  f1.perm_stddev,
  f2.combo_stddev,
  f1.perm_count,
  f2.combo_count
FROM
(
  SELECT
    perm,
    combo,
    avg( value ) AS perm_average_value,
    stddev_pop( value ) AS perm_stddev,
    count( * ) AS perm_count
  FROM foo
  GROUP BY perm, combo
) AS f1
JOIN
(
  SELECT
    combo,
    avg( value ) AS combo_average_value,
    stddev_pop( value ) AS combo_stddev,
    count( * ) AS combo_count
  FROM foo
  GROUP BY combo
) AS f2 ON ( f1.combo = f2.combo );

However, that query can get pretty slow when I have a lot of data, because the "foo" table (which, in reality, consists of 14 partitions, each with roughly 4 million rows) needs to be scanned twice.

Recently, I learned that Postgres supports "Window Functions", which is basically like a GROUP BY for a particular column. I modified my query to use these like so:

SELECT
  perm,
  combo,
  avg( value ) as perm_average_value,
  avg( avg( value ) ) over w_combo AS combo_average_value,
  stddev_pop( value ) as perm_stddev,
  stddev_pop( avg( value ) ) over w_combo as combo_stddev,
  count( * ) as perm_count,
  sum( count( * ) ) over w_combo AS combo_count
FROM foo
GROUP BY perm, combo
WINDOW w_combo AS ( PARTITION BY combo );

While this works for the "combo_count" column, the "combo_average_value" and "combo_stddev" columns are no longer accurate. It appears that the average is being taken for each permutation, and then being averaged a second time for each combination, which is incorrect.

How can I fix this? Can window functions even be used as an optimization here?
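They can: one single-scan shape, as a sketch, is to window over the raw rows instead of over the grouped aggregates and collapse the duplicates with DISTINCT. Window functions are evaluated before DISTINCT, so the combo-level average and deviation come from the individual values rather than from averages of averages:

SELECT DISTINCT
  perm,
  combo,
  avg( value )        OVER w_perm  AS perm_average_value,
  avg( value )        OVER w_combo AS combo_average_value,
  stddev_pop( value ) OVER w_perm  AS perm_stddev,
  stddev_pop( value ) OVER w_combo AS combo_stddev,
  count( * )          OVER w_perm  AS perm_count,
  count( * )          OVER w_combo AS combo_count
FROM foo
WINDOW w_perm  AS ( PARTITION BY perm, combo ),
       w_combo AS ( PARTITION BY combo );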

mysql works, mysqldump doesn't

Posted: 12 Jul 2013 04:23 PM PDT

I have mysql 5.5 on my ubuntu 12.04 server. This command:

mysql -u root -p  

works perfectly, but this gives me an error:

mysqldump -u root -p mydb_name > a.sql
mysqldump: Got error: 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) when trying to connect

In my.cnf file, the socket is set to:

socket = /home/mysql/mysql.sock  

so I have no idea where /var/run/mysqld/mysqld.sock is coming from. Thanks.
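/var/run/mysqld/mysqld.sock is likely the compiled-in client default. One probable explanation: mysqldump reads the [client] and [mysqldump] option groups, not [mysqld] (and not [mysql], which only the mysql client reads), so if the socket is declared under one of those other groups, mysqldump falls back to the default. A sketch of the fix is to declare it for all clients in my.cnf:

    [client]
    socket = /home/mysql/mysql.sock

or to pass it per invocation:

    mysqldump --socket=/home/mysql/mysql.sock -u root -p mydb_name > a.sql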

Optimizing PostgreSQL for transient data

Posted: 12 Jul 2013 08:58 PM PDT

I have several tables with 100-300 columns of integer types each, that hold highly volatile data. The datasets are keyed by one or two primary keys, and when refresh occurs, the whole dataset is deleted and new data is inserted in one transaction. Dataset size is usually a few hundred rows, but can be up to several thousand rows in extreme cases. Refresh occurs once per second, and dataset updates for different keys are usually disjointed, so dropping and recreating the table is not feasible.

How do I tune Postgres to handle such load? I can use the latest and greatest version if that makes any difference.
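If losing the tables' contents on a crash is acceptable (it usually is for data refreshed every second), unlogged tables are the biggest single lever in 9.1+, since they skip WAL entirely; per-table autovacuum settings then keep up with the constant delete/insert churn. A sketch with placeholder columns and values:

-- UNLOGGED: no WAL for this table; it is truncated after a crash
CREATE UNLOGGED TABLE dataset_example (
    key1 integer NOT NULL PRIMARY KEY,
    col1 integer,
    col2 integer
);

-- vacuum aggressively: every refresh turns the whole dataset into dead rows
ALTER TABLE dataset_example SET (
    autovacuum_vacuum_scale_factor = 0.0,
    autovacuum_vacuum_threshold    = 500
);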

Which one is more efficient: select from linked server or insert into linked server?

Posted: 12 Jul 2013 08:17 PM PDT

Suppose I have to export data from one server to another (through linked servers). Which statement will be more efficient?

Executing in source server:

INSERT INTO [DestinationLinkedServer].[DestinationDB].[dbo].[Table]
SELECT a, b, c, ... FROM [dbo].Udf_GetExportData()

Or executing in target server:

INSERT INTO [dbo].[Table]
SELECT a, b, c, ...
FROM OPENQUERY([OriginLinkedServer],
    'SELECT a, b, c, ... FROM [OriginDB].[dbo].Udf_GetExportData()')

Which one will be faster and consume fewer resources in total (on both the source and target server)? Both servers are SQL Server 2005.

Count Most Recent Browser Used By All Users

Posted: 12 Jul 2013 03:43 PM PDT

I have a table with columns Type, Login, and Timestamp. It has been collecting information about the type of browsers users log into the system with for years. Obviously many of the users have since upgraded their browsers. I do not want to consider data for old browsers when users have upgraded. Here is the query I currently have to get a count of how many people use which browser:

select
    Type,
    count(Type) as Count
from
    (select distinct
        Login,
        Type
     from BROWSER) as q
group by Type

This appears to work fine, but I have a lot of old browsers that I know are no longer being used:

yucky - old browsers no one uses anymore

How would I select only the most recent browser used by each user, based on timestamp (or Browser_ID could work also, since they are written sequentially)? I also intend to ignore data from users who have not logged in during the past year, but that is simple to add later (I only mention it so that no one brings up why I would want old browser data for inactive users - I do plan on addressing that as well, but I already know how to do that).

If anyone is up for a challenge, there are some users who log in from a couple different computers - or perhaps from their phone and from their desk. It would be great if I could determine that they frequently log in from two different places and also collect that data, but this might be too complicated. If anyone figured out how to add this complicated step, I'll gladly award a 50 point bounty to the answer.
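For the main question, one sketch is to join each user's rows back to their latest timestamp, then count (if a user has two rows sharing an identical latest Timestamp, both would be counted, so Browser_ID may be the safer tie-breaker):

select Type, count(*) as Count
from BROWSER b
join (select Login, max(Timestamp) as LastSeen
      from BROWSER
      group by Login) latest
  on latest.Login = b.Login
 and latest.LastSeen = b.Timestamp
group by Type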

compilation error while referencing pl/sql variable in FOR loop of a procedure

Posted: 12 Jul 2013 02:32 PM PDT

I have written a PL/SQL procedure to get the output in a file for each sale_location, but I am getting errors, as follows:

create or replace PROCEDURE Rekey_Report_allcntries(sale_loc number)
is
  v_outfile UTL_FILE.FILE_TYPE;
  v_outbuffer varchar2(1000);
  v_outbuffer1 varchar2(200);
  v_tempbuffer varchar2(1000);
Begin
  v_outfile := UTL_FILE.FOPEN('INOUT_DIR','Rekey_report.txt','w');
  UTL_FILE.PUT_LINE(v_outfile,'Upc,Description,sale_location_id,Keycat,Keycat_name,Maintained_flag,Approval_code,Eff_date,Create_user_id,responsible_party_name,User_Status,Count_of_keycats,Count_of_actions,Original_Keycat_Suggestion');
  for c_lang_id in (Select default_language_id from sale_location
                    where sale_location_type_id = 1 and sale_location_id = sale_loc)
  LOOP
    for i in (select e1.upc_id upc_id, upc_description, sale_location_id, keycat_id, xlong_name,
                     maintained_flag, approval_code, eff_date, e1.create_user_id current_user_id,
                     responsible_party_name, user_status, i1.create_user_id Original_Keycat_Suggestion
              from
              (select ik.upc_id, replace(it.description,',','') upc_description, ik.sale_location_id,
                      ik.keycat_id, k.xlong_name, k.maintained_flag, ik.approval_code, ik.eff_date,
                      ik.create_user_id, rp.responsible_party_name,
                      decode(rp.exp_date,null,'Active','Inactive') USER_STATUS
               FROM item_keycat ik, keycat k, responsible_party rp, item it
               where ik.sale_location_id = sale_loc
               and it.sale_location_id = sale_loc
               and it.upc_id = ik.upc_id
               and ik.keycat_id = k.keycat_id
               and k.language_id = c_lang_id.default_language_id
               and k.sale_location_id = sale_loc
               and rp.default_sale_location_id = sale_loc
               and ik.create_user_id = rp.user_id
               and rp.exp_date is null
               and ik.create_user_id like 'IS%'
               and ik.create_user_id (+)= rp.user_id
               and ik.eff_date >= trunc(sysdate-365)
               and ik.upc_id <> 0
               and not exists (select 'x' from item_state ist
                               where ist.upc_id = it.upc_id
                               and   ist.sale_location_id = sale_loc
                               and   ist.state_id = 'COMPLETE'
                               and   ist.eff_date <= trunc(sysdate) - 7
                               and   ist.exp_date is null)
              ) e1,
              (select k.upc_id, item_keycat_id, create_user_id, k.sale_location_id
               from item_keycat k,
                    (select upc_id, sale_location_id, min(item_keycat_id) item_id
                     from item_keycat
                     group by upc_id, sale_location_id) u
               where k.upc_id = u.upc_id
               and k.item_keycat_id = u.item_id
               and k.sale_location_id = u.sale_location_id
               and k.sale_location_id = sale_loc) i1
              where e1.upc_id = i1.upc_id  --added to get Original_Keycat_Suggestion
              and e1.eff_date in (select max(eff_date) from item_keycat
                                  where sale_location_id = sale_loc and upc_id = e1.upc_id
                                  and eff_date >= trunc(sysdate-365)))
    Loop
      begin
        select count(distinct keycat_id) into v_outbuffer1
        from item_keycat where sale_location_id = sale_loc and upc_id = i.upc_id; -- Get the keycat count
        v_outbuffer := v_outbuffer1;
      Exception
        When No_data_found then
          v_outbuffer := 'NULL';
      End;
      begin
        select count(*) into v_outbuffer1
        from item_keycat where sale_location_id = sale_loc and upc_id = i.upc_id; -- Get the number of keycat actions
        v_outbuffer := v_outbuffer ||','|| v_outbuffer1;
      Exception
        When No_data_found then
          v_outbuffer := v_outbuffer ||','||'NULL';
      End;
      v_tempbuffer := i.upc_id||','||i.upc_description||','||i.sale_location_id||','||i.keycat_id||','||i.xlong_name||','||i.maintained_flag||','||i.approval_code||','||i.eff_date||','||i.current_user_id||','||i.responsible_party_name||','||i.user_status||','||v_outbuffer||','||i.Original_Keycat_Suggestion;
      UTL_FILE.PUT_LINE(v_outfile, v_tempbuffer);
    End Loop;
  End Loop;
  UTL_FILE.FCLOSE(v_outfile);
  --End;
END Rekey_Report_allcntries;
/

Errors in above script are as follows:

Error(16,11): PL/SQL: SQL Statement ignored
Error(16,51): PL/SQL: ORA-00918: column ambiguously defined
Error(58,1): PL/SQL: SQL Statement ignored
Error(59,65): PL/SQL: ORA-00904: "I"."UPC_ID": invalid identifier
Error(59,65): PLS-00364: loop index variable 'I' use is invalid
Error(71,1): PL/SQL: SQL Statement ignored
Error(72,65): PLS-00364: loop index variable 'I' use is invalid
Error(72,65): PL/SQL: ORA-00904: "I"."UPC_ID": invalid identifier
Error(83,1): PL/SQL: Statement ignored
Error(83,15): PLS-00364: loop index variable 'I' use is invalid

Can anyone please help? Thanks.

mongostat causing CPU 100%

Posted: 12 Jul 2013 01:39 PM PDT

On a 64-bit Debian 6 OS with 4 CPU cores and 8 GB RAM, I can reproduce an issue with mongostat.

Whenever I create an index, reIndex, or even build an index in the background, top shows 40-60% CPU load and RAM filling up to 90%. While the index task is running, I try mongostat and watch the CPU load. Immediately, CPU load is nearly 100% and blocking.

mongostat

connected to: 127.0.0.1

Nothing after that, only CPU load at the limit until Ctrl + C stops mongostat. A few seconds later, CPU load scales down to ~50% and all is fine again.

Trying mongostat again with the same behaviour...

Is mongodb expecting more CPUs or what's wrong?

How do I get Pentaho Reporting/Kettle to produce PDF reports by business entity?

Posted: 12 Jul 2013 12:42 PM PDT

We use Kettle/PDI to handle usage records from legacy systems and feed them to an analysis database/DWH; we also send our customers a monthly report of their activity, plus a backend commission report, from this legacy data.

Right now, I compile this into one enormous PDF using Pentaho Report Designer, then print it and hand it to the Finance gals who send checks out to the customer for the amount on the statement I've made. They hand-collate all this.

Obviously we would like to put these online, since we already have a file area for each customer. What I need is for Pentaho to make, instead of a new page, a new PDF file for each business, and then name it with the Business ID number and month, or something equally meaningful. Is this possible?

We have experimented with splitting up the PDF but it takes someone several hours to process, and it is not pleasant work at all. It seems this should be possible, but I do not know enough of the intricacies of Pentaho to make it work.

Relational Database. Data Input. SQL/PHP [on hold]

Posted: 12 Jul 2013 12:12 PM PDT

I am starting to explore database design using php and sql; just looking to bulk out high level understanding as I read through the details of SQL, PHP and RDBMS.

Example

Two tables: CUSTOMER and ADDRESS; customerid and addressid are the respective auto-incrementing primary keys. The addressid is a foreign key in the customer table.

Goal: Customers use a web form to populate the database with their customer and address details.

I split addresses out into a separate table in order to avoid data duplication. If a customer filling out the web form has an address that is already stored, how would I go about creating a lookup to select and link the new customer details with the address already there? Otherwise, starting to enter the address will create a new address record, distinguished from existing records only by the automatically created new addressid.
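The usual pattern is look up first, insert only on a miss; a minimal sketch with hypothetical address columns:

SELECT addressid FROM address
WHERE street = ? AND city = ? AND postcode = ?;

-- row found: reuse that addressid in the new customer row
-- no row: insert the address and take the generated key
INSERT INTO address (street, city, postcode) VALUES (?, ?, ?);
-- in PHP, the new key is mysqli_insert_id() (or PDO::lastInsertId())
INSERT INTO customer (name, addressid) VALUES (?, ?);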

Appreciate this question is basic and probably similar to others somewhere on the web; any pointers towards useful articles/information sources would be much appreciated!

Kind regards,

Installing Full Text Search to SQL Server 2008 R2

Posted: 12 Jul 2013 01:03 PM PDT

I installed SQL Server 2008 R2 Express successfully, then realised I need to get Full Text Search. So I downloaded the "Advanced Services" installation package, but when I run it there is no option in the "Feature Selection" part for Full Text Search. Please don't tell me I have to uninstall and reinstall?

multi master replication

Posted: 12 Jul 2013 02:22 PM PDT

Let us say I have a master(m) slave(s) replication. Now, in the scheme of things, I introduce another database/schema(foo) not associated with the master in any way and I want this schema to be replicated into the slave(s). Can I do this?

AFAIK, this cannot be done. What is the best way to pull this off? The reason I want to do this is I want to join tables from foo with s/m.

The data replication need not happen in real time, can be a daily cron job too. Is mysqldump the way to go? Is there some hack that I can pull off?
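mysqldump on a cron schedule is a workable hack for a non-real-time copy; a sketch with placeholder hosts and credentials:

    # nightly at 03:00: re-copy foo into the slave so it can be joined with s/m
    0 3 * * * mysqldump -h foo-host -u dump_user -pSECRET --databases foo | mysql -h slave-host -u dump_user -pSECRET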

SQL server windows authentication through network

Posted: 12 Jul 2013 12:44 PM PDT

I am using SQL Server 2008 R2. There are about 50 PCs in my office connected to the network. On a SQL Server 2008 R2 Express instance (installed on Windows 8) I created Windows accounts for all users, with the same names as on their own PCs, and then created Windows-authentication logins in SQL. Now all the users are able to connect to SQL Server using Windows authentication.

But now I am trying to do the same for another SQL Server 2008 R2 Express instance, which is installed on Windows XP SP3, and it is not working. When I try to connect to SQL Server using Windows authentication from a networked PC, a message comes up like "Login failed for user 'PC91\Guest'." It is not recognizing the Windows account of my PC. Why does this happen? Please tell me a solution.

How to merge two rows in a table? [on hold]

Posted: 12 Jul 2013 11:12 AM PDT

I have a table called Person with columns Col1, Col2, Col3, etc.

In Col1 I have two values: ABC and XYZ.

What I want is to replicate all the values from the other columns (Col2, Col3, etc.) for the row ABC to XYZ.

What is the easiest way to do this?
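A sketch in SQL Server syntax, assuming Col1 uniquely identifies each of the two rows:

UPDATE tgt
SET    tgt.Col2 = src.Col2,
       tgt.Col3 = src.Col3      -- repeat for the remaining columns
FROM   Person AS tgt
JOIN   Person AS src ON src.Col1 = 'ABC'
WHERE  tgt.Col1 = 'XYZ';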

linked list in SQL

Posted: 12 Jul 2013 10:48 AM PDT

Though SQL is more affiliated with table-like operations and not so much with recursion, say we would like to implement the linked (or doubly-linked) list concept (like we have, for instance, in C).
Is there any way to do this efficiently, considering that items can move from any place to any other place in the list?
Some solution using CLR?
Or is it really something that should never be brought to SQL Server?

Though I pinned SQL Server, this is an academic like question, so a solution in any other is also good, even if we just get to the conclusion that this is something that should never be brought to the database.
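For reference, the straightforward relational encoding is an adjacency table where each row stores its successor; moving an item is then two or three single-row pointer updates, and an in-order walk is a recursive CTE. A sketch (the head id is a placeholder):

CREATE TABLE list_node (
    id      int PRIMARY KEY,
    next_id int NULL REFERENCES list_node(id),
    payload varchar(100)
);

WITH walk AS (
    SELECT id, next_id, payload, 1 AS pos
    FROM   list_node
    WHERE  id = 1                     -- head of the list
    UNION ALL
    SELECT n.id, n.next_id, n.payload, walk.pos + 1
    FROM   list_node n
    JOIN   walk ON n.id = walk.next_id
)
SELECT id, payload FROM walk ORDER BY pos;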

Most efficient ordering post database design

Posted: 12 Jul 2013 12:06 PM PDT

I have a posts table with a post_order column, in which I store the order of each post. When I change the order of a row from 25 to 15, I have to update every row from 15 onward. That's fine for a few rows, but with thousands of rows it's terrible.

Is there any better design for ordering posts, that is more efficient?
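One common alternative, as a sketch: leave gaps between the ordering keys (1000, 2000, 3000, ...) so that a move only touches the moved row, and renumber only on the rare occasion a gap closes:

-- move post 42 between the posts ordered 15000 and 16000: a one-row update
UPDATE posts
SET    post_order = (15000 + 16000) / 2   -- midpoint of its new neighbours
WHERE  id = 42;
-- only when two neighbours end up 1 apart does the table need renumbering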

How to do incremental/differential backup every hour in Postgres 9.1?

Posted: 12 Jul 2013 09:05 PM PDT

Trying to do an hourly hot incremental backup of a single postgres server to s3.

I have the following setup in postgresql.conf:

max_wal_senders = 2
wal_level = archive
archive_mode = on
archive_command = 'copy "%p" "c:\\postgres\\archive\\%f"'

I did a base backup with pg_basebackup -U postgres -D ..\archive -Ft -l 2013-07-07 -x

Which made a big base.tar file in the archive folder and added some long file name files, which I assume are the WALs.

pg_start_backup('label') and pg_stop_backup() seem to create the WAL files in pg_xlog, and then copy them to the archive folder.

  1. what command(s) do I run to do a new incremental backup (pg_basebackup does a new base backup which I don't want right now)?

  2. What does the label in pg_basebackup and pg_start_backup() do exactly?

  3. WAL Files don't seem to be removed from pg_xlog. What should I do about that? It seems to keep 5 or so WALs in pg_xlog. Is that to be expected?

  4. Do I need to backup the .backup files in the archive folder or just the 16,384KB WAL files?

  5. should I use the --xlog parameter and if so do I need to change wal_keep_segments from 0?
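On question 1, with continuous archiving the archived WAL itself is effectively the incremental backup: nothing extra needs to run hourly beyond shipping the archive folder to S3. Restoring means unpacking the last base backup and replaying the archived WAL via a recovery.conf; a sketch for the Windows paths above:

    # recovery.conf, placed in the restored data directory
    restore_command = 'copy "c:\\postgres\\archive\\%f" "%p"'
    # optional point-in-time target (placeholder timestamp)
    recovery_target_time = '2013-07-12 09:00:00'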

Deploying to SSIS catalog and get permissions error

Posted: 12 Jul 2013 12:58 PM PDT

When I attempt to deploy to the SSIS 2012 Catalog on a new server I get an error. I have researched it on the web for several hours, and none of the information available online fixes my issue.
(The error message was posted as a screenshot, which is not reproduced here.)

InnoDB Failure of some kind

Posted: 12 Jul 2013 01:56 PM PDT

I have MySQL 5.5 installed. I tried to install Joomla, but it failed. I went into their SQL, replaced ENGINE=InnoDB with MyISAM, and the install worked.

InnoDB is listed under SHOW ENGINES;

Any idea what the cause is or how to fix this so other InnoDB sites can be used?

I had these errors:

MySQL ERROR No: 1030
MySQL Error: Got error -1 from storage engine

MySQL is running but not working

Posted: 12 Jul 2013 03:56 PM PDT

In an attempt to tune MySQL to make it work with a recent installation of Drupal, I had to modify the MySQL settings on my server. After modifying the configuration file (/etc/my.cnf), MySQL stopped working. After some attempts I managed to start it again, but now all my PHP/MySQL websites are unable to connect to their DBs.

Here is why is so confusing:

  • If I check a phpinfo.php on any given site, the MySQL info is there
  • phpmyadmin runs just fine
  • I can run mysql from SSH using root but I see that mysql activity is reduced, look:

[root@server mysql]# mysqladmin processlist

+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| Id  | User      | Host      | db        | Command        | Time | State              | Info             |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| 7   | eximstats | localhost | eximstats | Sleep          | 30   |                    |                  |
| 103 | DELAYED   | localhost | eximstats | Delayed insert | 30   | Waiting for INSERT |                  |
| 104 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 105 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 119 | root      | localhost |           | Query          | 0    |                    | show processlist |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+

My websites using MySQL almost all say:

Error establishing a database connection  

Another says:

Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'website_USER'@'localhost' (using password: NO)  

This is my current my.cnf:

[mysqld]
#datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#pid-file=/var/lib/mysql/mysqld.pid
#skip-bdb
#skip-innodb
#err-log=/var/log/mysql.log
#bind-address = server.escalehost.com
log-bin = /var/lib/mysql/log-bin
#innodb_buffer_pool_size=2M
#innodb_additional_mem_pool_size=500K
#innodb_log_buffer_size=500K
#innodb_thread_concurrency=2
#max_connections=125
#table_cache=2500
#thread_cache_size=100
#thread_stack=250K
#wait_timeout=10
#join_buffer=5M
#myisam_sort_buffer_size=15M
#query_cache_size=15M
#read_rnd_buffer_size=5M
max_allowed_packet = 64M
#open_files_limit=8602

#[client]
#port           = 3306
#socket=/var/lib/mysql/mysql.sock

#[mysql.server]
#user=mysql
#basedir=/var/lib

[mysqld_safe]
#socket=/var/lib/mysql/mysql.sock
#err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

I commented out most of it to return it to its simplest version... How can I make the web side connect to MySQL?

Mistake during Oracle 11g PITR

Posted: 12 Jul 2013 10:56 AM PDT

I tried using SET UNTIL TIME ... and mistyped the date. Can anyone help me understand how to get my backups into a manageable state?

After the accidental recovery, most of my backupsets disappeared.

I recovered them and used 'catalog recovery area' .. and they're listed in 'list backupset'. But something still isn't right.

When I do a PITR now, I get messages that my dbf files aren't available.

And the 'list backupset' output does show backupsets, but they are listed differently than the files which weren't included in the 'bad' recovery.

Gists with the error and the list of backupsets are here https://gist.github.com/akinsgre/5561254

How can I query data from a linked server, and pass it parameters to filter by?

Posted: 12 Jul 2013 04:00 PM PDT

I have a really big query that needs to be run on multiple databases, and the results appended to a temp table and returned.

The basic syntax looks something like this:

INSERT INTO #tmpTable (Id, ...)

SELECT T1.Id, ...
FROM Server.Database.dbo.Table1 as T1
INNER JOIN #tmpIds as T ON T1.Id = T.Id

INNER JOIN Server.Database.dbo.Table2 as T2 ON T1.Id = T2.Id
INNER JOIN Server.Database.dbo.Table3 as T3 ON T2.Id = T3.Id
LEFT OUTER JOIN Server.Database.dbo.Table4 as T4 ON T3.Id = T4.Id
LEFT OUTER JOIN Server.Database.dbo.Table5 as T5 ON T4.Id = T5.Id
LEFT OUTER JOIN Server.Database.dbo.Table6 as T6 ON T5.Id = T6.Id

The query runs quickly if run locally on the individual servers; however, it takes a long time to run from a linked server using the 4-part names like above.

The problem appears to be that it queries the linked server for the unfiltered result set first, then joins it to the #tmpIds table on the local server afterwards, which makes the query take a very long time to run.

If I hardcode the Ids to filter the result set on the linked server, such as

SELECT T1.Id, ...
FROM Server.Database.dbo.Table1 as T1
-- INNER JOIN #tmpIds as T ON T1.Id = T.Id
INNER JOIN Server.Database.dbo.Table2 as T2 ON T1.Id = T2.Id
INNER JOIN Server.Database.dbo.Table3 as T3 ON T2.Id = T3.Id
LEFT OUTER JOIN Server.Database.dbo.Table4 as T4 ON T3.Id = T4.Id
LEFT OUTER JOIN Server.Database.dbo.Table5 as T5 ON T4.Id = T5.Id
LEFT OUTER JOIN Server.Database.dbo.Table6 as T6 ON T5.Id = T6.Id

WHERE T1.Id IN (1, 2, 3)

it runs quickly in just a few seconds.

Is there a way to run this query so it filters the result set of the query from the linked server by the #tmpId table first, before returning the result set to the local server?

Some things to note

  • The query is very large and complex, and Dynamic SQL is not a viable option for it due to the maintenance nightmare it causes.

    I would be open to suggestions about how to use Dynamic SQL for something else though, such as running a stored procedure or UDF if there's a way to do that over a linked server (tried a few different methods such as sp_executeSQL, OPENROWSET, and OPENQUERY, but those all failed).

  • Because it uses the 4-part naming convention, I cannot use a UDF on the remote server
  • Distributed Transactions are disabled, so the following does not work

    INSERT INTO #table   EXEC Server.Database.dbo.StoredProcedure @ids  
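One pattern that keeps the filtering on the remote side without dynamic SQL, sketched below: create a small permanent staging table on the remote server, push the ids into it, and join to it remotely. With every table in the FROM clause on the same linked server, the whole filtered query can be evaluated remotely (the StagingIds table is hypothetical):

    -- refill the remote staging table with just the ids to export
    DELETE FROM Server.Database.dbo.StagingIds;
    INSERT INTO Server.Database.dbo.StagingIds (Id)
    SELECT Id FROM #tmpIds;

    -- the join now happens entirely on the remote server
    INSERT INTO #tmpTable (Id, ...)
    SELECT T1.Id, ...
    FROM Server.Database.dbo.Table1 as T1
    INNER JOIN Server.Database.dbo.StagingIds as S ON T1.Id = S.Id
    INNER JOIN Server.Database.dbo.Table2 as T2 ON T1.Id = T2.Id
    ...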

DBCC CHECKDB Notification

Posted: 12 Jul 2013 12:28 PM PDT

There are plenty of questions on DBA.SE regarding DBCC CHECKDB and how to resolve problems when errors are returned. My specific question is on actually getting notified that DBCC CHECKDB returned errors. Most all DBAs know that you can automate the command and should run it often.

I came across this article by Cindy Gross, which has some very good notes. In it she mentions use of SQL Server Agent that if it finds errors from the execution of the CHECKDB command it will fail that step (or job depending on configuration). She points to Paul Randal's blog post on the topic here.

Now I am curious whether anyone knows if the Check Database Integrity Task in a maintenance plan would do the same thing. MSDN does not mention that it will, and I have truthfully not been in an environment where it has come across a corruption issue, so I can't say that it does. This would be versus simply setting up a SQL Agent job with multiple steps that runs the specific command against each database, as Cindy suggested.

Thoughts? Obviously proof is in the pudding so providing more than just a guess would be helpful...
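Not proof, but for comparison, a sketch of the Agent-job approach: corruption findings from CHECKDB are raised as severity 16+ errors, which fail the job step, and the job's notification can then alert an operator (the database name is a placeholder):

    -- one job step per database; configure the job to notify on failure
    DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;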

Problem with order by in MySQL subquery

Posted: 12 Jul 2013 03:32 PM PDT

I'm running a query that runs in about 2 seconds when the subquery does not include the order by date_sold clause. However, when I add the order by clause, the query runs forever.

I'm confused, since it seems that I'm using an index for both queries, and as I've understood it, the subquery should be able to use the index. Any ideas where I went wrong?

EXPLAIN Select buyer_id as bid,
               (SELECT seller_id
                from sale USE INDEX(tester)
                where buyer_id = bid
                order by date_sold LIMIT 1)
        from sale where seller_id = 35514335;

+----+--------------------+-------+-------+-----------------------------------+--------+---------+-------+-------+--------------------------+
| id | select_type        | table | type  | possible_keys                     | key    | key_len | ref   | rows  | Extra                    |
+----+--------------------+-------+-------+-----------------------------------+--------+---------+-------+-------+--------------------------+
|  1 | PRIMARY            | sale  | ref   | seller_id,seller_id_index_asc,sub | sub    | 8       | const | 14933 | Using index              |
|  2 | DEPENDENT SUBQUERY | sale  | index | NULL                              | tester | 24      | NULL  |     1 | Using where; Using index |
+----+--------------------+-------+-------+-----------------------------------+--------+---------+-------+-------+--------------------------+

The table structure

| id               | int(11)    | NO   | PRI | NULL    | auto_increment |
| seller_id        | bigint(20) | NO   | MUL | NULL    |                |
| buyer_id         | bigint(20) | NO   | MUL | NULL    |                |
| date_acquired    | bigint(20) | NO   |     | NULL    |                |
| date_sold        | bigint(20) | NO   | MUL | NULL    |                |
| brand            | text       | NO   | MUL | NULL    |                |
| muncipality_code | bigint(20) | NO   | MUL | NULL    |                |
| new              | int(11)    | YES  |     | NULL    |                |
| car_regnumber    | varchar(6) | YES  |     | NULL    |                |
| gender           | varchar(1) | YES  |     | NULL    |                |

Tester index

| sale  |          1 | tester              |            1 | date_sold        | A         |       14840 |     NULL | NULL   |      | BTREE      |         |
| sale  |          1 | tester              |            2 | buyer_id         | A         |    11768564 |     NULL | NULL   |      | BTREE      |         |
| sale  |          1 | tester              |            3 | seller_id        | A         |    11768564 |     NULL | NULL   |      | BTREE      |         |

SHOW CREATE TABLE SALE

CREATE TABLE `sale` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `seller_id` bigint(20) NOT NULL,
  `buyer_id` bigint(20) NOT NULL,
  `date_acquired` bigint(20) NOT NULL,
  `date_sold` bigint(20) NOT NULL,
  `brand` text NOT NULL,
  `muncipality_code` bigint(20) NOT NULL,
  `new` int(11) DEFAULT NULL,
  `car_regnumber` varchar(6) DEFAULT NULL,
  `gender` varchar(1) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `muncipality_code` (`muncipality_code`),
  KEY `brand` (`brand`(10)),
  KEY `seller_id` (`seller_id`),
  KEY `seller_id_index_asc` (`seller_id`),
  KEY `date_sold` (`date_sold`),
  KEY `tester` (`date_sold`,`buyer_id`,`seller_id`)
) ENGINE=MyISAM AUTO_INCREMENT=66390336 DEFAULT CHARSET=latin1 ROW_FORMAT=FIXED
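One thing worth noting (a guess, not a tested fix): the tester index leads with date_sold, so the correlated subquery cannot seek on buyer_id and has to walk the index in date order. An index with the filter column first and the sort column second matches the subquery's WHERE buyer_id = ... ORDER BY date_sold LIMIT 1 shape exactly:

ALTER TABLE sale ADD INDEX buyer_date (buyer_id, date_sold);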

How to check growth of database in mysql?

Posted: 12 Jul 2013 04:56 PM PDT

I want to know: is there any method to check the growth of a database from its files?

EXAMPLES

  • Database A contains all tables in INNODB storage engine
  • Database B contains all tables in MyISAM storage engine
  • Database C contains a mixture of InnoDB and MyISAM tables
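A common approach, sketched: snapshot the per-schema sizes from information_schema on a schedule and compare over time (for InnoDB, data_length and index_length are estimates):

SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;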

[SQL Server] Using Dynamic SQL in Stored Procedures


Using Dynamic SQL in Stored Procedures







[MS SQL Server] SQL 2008 R2 setup.exe does nothing

SQL 2008 R2 setup.exe does nothing

Posted: 11 Jul 2013 06:16 AM PDT

I have a Windows 7 64-bit machine running SQL Server 2008 Enterprise. I would like to upgrade it to SQL Server 2008 R2 Enterprise. I loaded the media on the machine, but when I double-click on setup.exe, it displays a command box for a split second, goes away, and then nothing happens. I tried rebooting the box and running as Administrator (by the way, I am part of the local Admin group on this box), but nothing happens when I double-click on setup.exe. I have used this same media on other machines and it works fine. Are there any logs that I can check to see what may be causing the issue?

thanks
scott

Can I migrate a publisher to 2012 and have it continue to replicate to 2008R2 Subscriber without missing a beat

Posted: 12 Jul 2013 12:47 AM PDT

I will be migrating our Production DB to SQL 2012 shortly (no service interruption, using mirroring from 2008R2 to 2012); that part works well. But our Production DB is a publisher feeding a Reporting DB, and I'm not clear on whether replication can pick up from where it left off after the migration, or whether I have to re-build the publication/snapshot from scratch, and whether I can continue to use the existing 2008R2 distributor or have to switch to a 2012 distributor. Either way, I would like to know the prescribed upgrade method from some experts who have given this a shot. Are there good articles (advice) on the subject? I'd rather not have to fully re-build the transactional publication, but could do that (including the lengthy snapshot) if that is the best/safest approach. In that case I might just re-configure the whole lot now with a 2012 distributor and make the new subscriber Reporting database a 2012 one. Thoughts?

...Ray

log file usage issue

Posted: 12 Jul 2013 12:39 AM PDT

Hi all. The SQL Server log file usage is showing as 100% in Task Manager, but internally it is showing as only 1 MB. How do I solve this issue?

Converting a trace to an extended events package

Posted: 11 Jul 2013 11:56 PM PDT

There's a trace I'm required to have running on all my SQL servers 24x7. This trace logs security information to a file. What I'd like to look into doing is converting this trace to an Extended Events session, as XEs are (supposed to be) lighter-weight than a trace. The problem is, all my servers are running SQL 2008 R2, so I don't have a pretty GUI to create the XEs in, nor will I any time soon. Does anyone know of a site or way to cross-reference a trace event to an XE event? I've been trying to find one; I found one query, but it requires a DMV that's not in SQL 08...

Thanks,
Jason

Sql Server - Index ReBuild

Posted: 11 Jul 2013 04:39 PM PDT

I am planning to rebuild/reorganize indexes, as my client has never done this and has never run update statistics. But my problem is that we have a replication DB, and it runs every few minutes as we are a subscriber. If I rebuild indexes, it will lock the tables during maintenance. We are running some evening and nightly jobs too. The replication DB is big, around 200 GB. Will it be a problem if I do the index maintenance during the weekend with the publication running? I have already run the script to identify index fragmentation, and I know which indexes need a rebuild or reorganize.

Thanks,

Execute permission denied

Posted: 11 Jul 2013 09:19 PM PDT

Hi Friends, I need to create a user who can create, execute, disable, and enable jobs. Is it possible to do this without giving "sysadmin" privileges?
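It should be; a sketch using the msdb Agent roles (login/user names are placeholders, and members can generally only modify jobs they own):

[code="sql"]USE msdb;
CREATE USER [DOMAIN\JobUser] FOR LOGIN [DOMAIN\JobUser];
EXEC sp_addrolemember N'SQLAgentOperatorRole', N'DOMAIN\JobUser';[/code]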

Hai all

Posted: 11 Jul 2013 05:02 PM PDT

Can anyone please explain why there would be connection timeout errors in SQL Server? What causes them, what are the possible resolutions, and how do I troubleshoot and fix those causes?

Thanks in advance,
RAJ.

The distribution agent failed to create temporary files

Posted: 26 Sep 2012 03:42 AM PDT

Hi, I'm getting this error with a transactional replication setup on SQL 2008R2 Standard (SP1):

The distribution agent failed to create temporary files in 'C:\Program Files\Microsoft SQL Server\100\COM' directory. System returned errorcode 5. (Source: MSSQL_REPL, Error number: MSSQL_REPL21100)

I've been to [url]http://support.microsoft.com/kb/956032/en-us[/url] amongst other places, and as a result given Full Control to Everyone on the C:\Program Files\Microsoft SQL Server\100\COM folder (I tried the SQL Agent and Distribution Agent accounts first). I've also disabled on-access scanning for the anti-virus software. It's a 2-node cluster, so I've done this for both nodes. Is there anything else I can try in order to troubleshoot further? This one has got me completely stumped.

Thanks everyone,
Duncan

[Articles] The Vacation Struggle

The Vacation Struggle

Posted: 11 Jul 2013 11:00 PM PDT

It can be hard to take all your time off each year, and Steve Jones is in that situation right now. However, he reminds us that life is more important than work.


[SQL 2012] Apply 2012 SP1 CU3 to Primary server of mirrored pair.

Apply 2012 SP1 CU3 to Primary server of mirrored pair.

Posted: 12 Jul 2013 12:44 AM PDT

Hi, I have 2 SQL 2012 64-bit servers that are set up as a mirrored pair. There is no witness. The mirror already has SP1 CU3 on it, and I now need to patch the Primary server to the same level. Fortunately, at the moment we don't need ultra-high availability on this system, so I can get a bit of downtime without having to think about failing over or using the mirror whilst I do the install. Therefore I can literally just go on to the Primary server and apply the patch when there are no connections. However, before I start, do I just need to 'pause' mirroring whilst I do this and 'resume' it when it's complete?

Thanks
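If pausing is the route taken, the commands are just (a sketch; the database name is a placeholder, run on the principal):

[code="sql"]ALTER DATABASE MyMirroredDb SET PARTNER SUSPEND;  -- pause mirroring
-- apply SP1 CU3, restart as needed
ALTER DATABASE MyMirroredDb SET PARTNER RESUME;   -- resume; the mirror catches up[/code]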

status

Posted: 11 Jul 2013 11:15 PM PDT

I have 3 instances of SQL Server: one is the default and the others are named. When I checked the status of the SQL Server services, one of them is stuck in executing mode, even though the service account has the needed privileges on SQL. How can we return the service to normal without restarting?

My sql script not getting all my records

Posted: 11 Jul 2013 11:04 PM PDT

Hi,

I am trying to write a script to get all records where the AccountIdName is not equal to certain companies, status = 1, and stc.value = 'F - Family'. This part works for me. However... I also need to grab records where status = 1, stc.value = 'F - Family', and the AccountIdName is null. I can't seem to figure out how to write this part. Below is my code:

[code]SELECT CON.AccountIdName, CON.FirstName, CON.LastName, STC.Value
FROM dbo.Contact CON
LEFT OUTER JOIN StringMap STC ON STC.attributename = 'new_coveragetype'
    AND STC.attributevalue = CON.new_CoverageType
    AND STC.objecttypecode = 2
    AND STC.langid = 1033
WHERE CON.StatusCode = 1
    AND STC.Value = 'F - Family'
    AND CON.AccountIdName NOT IN ( 'Company1', 'Company2', 'Company3', 'Company4', 'Company5', 'Company6' )[/code]
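The likely fix, as a sketch: NOT IN is never true for a NULL value, so the NULL case has to be allowed explicitly with an OR, wrapped in parentheses:

[code]WHERE CON.StatusCode = 1
    AND STC.Value = 'F - Family'
    AND ( CON.AccountIdName IS NULL
          OR CON.AccountIdName NOT IN ( 'Company1', 'Company2', 'Company3', 'Company4', 'Company5', 'Company6' ) )[/code]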

NT AUTHORITY\ANONYMOUS LOGON

Posted: 11 Jul 2013 10:13 PM PDT

Hello, I am having a problem with an SSIS 2012 package contacting a database on a separate server. When run manually under the Integration Services Catalog, the package fails to contact the database on the separate server, and that server records login failures for [NT AUTHORITY\ANONYMOUS LOGON]. SSIS is running under a domain account which has access to the server; however, the second server seems not to recognise the account. Has anyone seen this before? Any help or advice would be most appreciated.

Andrew

Can connect via SQLCMD but not linked server

Posted: 11 Jul 2013 03:26 AM PDT

I am a contractor DBA, and we have about 60 SQL Servers of all versions (I mean all, including 7) and flavors (Desktop, Express, Std, Ent...). I am trying to connect to all of them, either via scripts or linked servers, for the purpose of collecting data on them. Some of these servers have no accounts other than 'builtin\administrator' and sa, but I can connect to them either via SQLCMD or SSMS (or Enterprise Manager), either from my desk (I have SSMS 2012 installed) or on the server's desktop, since I am a domain admin. So, there are no servers that I don't have some access to. But, in trying to create a linked server to a handful of these, I haven't been able to find any combo of provider, provider string, driver, etc., that will allow me to create a linked server. The SQL Server I am trying to link from is 2012 Std. I know anyone will need more info to help me with this, so this is just a start.

Thanks,
David

[T-SQL] performance of sp

performance of sp

Posted: 12 Jul 2013 12:55 AM PDT

Hi, my SP is taking 5 min in prod. Is there any way I can improve performance?

BEGIN TRY
    DECLARE @alerts TABLE (BatchId int, ConsumerId BIGINT, MemberId INT, RId INT,
        Category INT, MailToMem BIT, MailToProv BIT, SortOrder varchar(25),
        Status VARCHAR(8), ReasonCD varchar(8), Active bit)
    DECLARE @Mid TABLE (ConsumerId BIGINT)

    INSERT INTO @Mid ( ConsumerId )
    SELECT DISTINCT ConsumerId FROM MCID_XREF xref WITH (NOLOCK)
    WHERE MemberId = @MemberID

    --List of all alerts based on MemberID OR MasterConsumerID
    INSERT INTO @alerts ( BatchId, ConsumerId, MemberId, RId, Category, MailToMem,
        MailToProv, SortOrder, Status, ReasonCD, Active )
    SELECT BatchId, ConsumerId, MemberId, RId, Category, MailToMem, MailToProv,
        SortOrder, Status, ReasonCD, Active
    FROM dbo.Alerts a
    WHERE MemberId = @MemberID OR ConsumerId IN (SELECT ConsumerId FROM @Mid)

    select a.MemberId, a.SortOrder, a.Status, a.ReasonCD, a.Active,
        r.Rid, r.RType, r.Sensitive, r.Weight, r.ROI, r.Program, r.Mgmt_Ranking, r.Significant,
        (select top 1 Name from library WITH (NoLock) where rID = r.ruleid) as ruleName,
        min(b.analysisasofdate) initiallyIdentified,
        max(b.analysisasofdate) mostRecentlyIdentified,
        a.category, v_cnt.ResponseCategoryCount,
        max(b.alertBatchId) batchid,
        r.harvardmednumber
    from library r WITH (NoLock)
    inner join @alerts a on a.ruleid = r.ruleid and a.alertbatchid = r.alertbatchid
    inner join batch b WITH (NoLock) on b.alertbatchId = a.alertbatchId
        and (a.mailToProv = 1 or a.mailToMem = 1)
    left outer join dbo.fn_CategoryCount(@memberid) v_cnt on v_cnt.ruleid = a.Rid
        and a.MemberId = v_cnt.memberid
    where a.MemberId = @MemberID
END TRY
BEGIN CATCH
    --ERROR
END CATCH
GO

Any help?

Locking on update...inner join

Posted: 12 Jul 2013 12:36 AM PDT

Just wanted to ask about the following scenario:

Update [table1]
set table1.value = subquery1.value
from (subquery) inner join [table1]

Just wondering about the locking effects, specifically on table1: could it cause issues where the update holds an exclusive lock that in turn prevents the join? I would imagine that would not be the case; asking those a lot more knowledgeable than I am, out of curiosity!

MIN and MAX values with a subquery.

Posted: 11 Jul 2013 07:59 AM PDT

Hi Everyone,

I have the following query in one of my databases. It returns 20 rows.

SELECT C_ID, H_Type, Name, A_Name, D_Date as 'First Date', D_Status, D_TYPE
FROM DH_info
WHERE (D_Date IN (SELECT MIN(D_Date) AS First_date FROM DH_info GROUP BY C_ID, D_TYPE))

I now have to add MAX(D_Date) to the query. The query should return C_ID, H_Type, Name, A_Name, MIN(D_Date) as 'First Date', [b]MAX(D_Date) as 'Last Date'[/b], D_Status, D_TYPE. Can anyone help me, as I have been working on this for hours without luck.
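One way to carry both dates, sketched below: compute MIN and MAX per (C_ID, D_TYPE) in a derived table and join back on the first date (this also pins each MIN to its own group, which the IN version does not):

SELECT d.C_ID, d.H_Type, d.Name, d.A_Name,
       x.FirstDate AS 'First Date', x.LastDate AS 'Last Date',
       d.D_Status, d.D_TYPE
FROM DH_info d
INNER JOIN (SELECT C_ID, D_TYPE,
                   MIN(D_Date) AS FirstDate,
                   MAX(D_Date) AS LastDate
            FROM DH_info
            GROUP BY C_ID, D_TYPE) x
    ON x.C_ID = d.C_ID
   AND x.D_TYPE = d.D_TYPE
   AND x.FirstDate = d.D_Date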

archive process

Posted: 11 Jul 2013 06:41 AM PDT

Hi, I need to create a process to archive records. Does anybody have a script or any ideas on ways to do it?

Thanks
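A minimal T-SQL sketch of one common pattern, moving aged rows in a single atomic statement (table, archive table, column, and cutoff are all placeholders):

[code="sql"]DELETE FROM dbo.BigTable
OUTPUT deleted.* INTO dbo.BigTable_Archive   -- archive table has matching columns
WHERE CreatedDate < DATEADD(year, -1, GETDATE());[/code]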

If Exists

Posted: 11 Jul 2013 08:16 PM PDT

Hi all, I just want to know whether, in my code below, it is possible to combine the first two EXISTS conditions into a single group and the last two into another group, so that I can put an AND condition between them. As it stands, my script can give me the wrong output: if any one condition is true, it returns True. If the @FirstVoucher check is true and the @Lastvoucher check is wrong, it will still give me True, which is wrong...

[code="sql"]IF EXISTS ( SELECT gv.VoucherNo
            FROM GV_Voucher As gv
            INNER JOIN GV_ReceivedOffice ro ON ro.VoucherNo = gv.VoucherNo
            INNER JOIN GV_VoucherStatus As gvs ON gv.VoucherStatusId = gvs.VoucherStatusId
            WHERE gvs.VoucherStatus = 'Active at HO' AND gv.VoucherNo = @FirstVoucher)
OR EXISTS ( SELECT gv.VoucherBookletNo
            FROM GV_StoreAllocation As gv
            JOIN GV_Voucher v on v.VoucherNo = gv.VoucherBookletNo
            JOIN GV_STNDetails stn ON stn.STNNO = gv.STNNo
            INNER JOIN GV_VoucherStatus As gvs ON v.VoucherStatusId = gvs.VoucherStatusId
            WHERE stn.AWBNo IS NULL AND gvs.VoucherStatus = 'Dispatched' and v.VoucherNo = @FirstVoucher)
AND EXISTS ( SELECT gv.VoucherNo
             FROM GV_Voucher As gv
             INNER JOIN GV_VoucherStatus As gvs ON gv.VoucherStatusId = gvs.VoucherStatusId
             INNER JOIN GV_ReceivedOffice ro ON ro.VoucherNo = gv.VoucherNo
             WHERE gvs.VoucherStatus = 'Active at HO' AND gv.VoucherNo = @Lastvoucher)
OR EXISTS ( SELECT gv.VoucherBookletNo
            FROM GV_StoreAllocation As gv
            JOIN GV_Voucher v on v.VoucherNo = gv.VoucherBookletNo
            JOIN GV_STNDetails stn ON stn.STNNO = gv.STNNo
            INNER JOIN GV_VoucherStatus As gvs ON v.VoucherStatusId = gvs.VoucherStatusId
            WHERE stn.AWBNo IS NULL AND gvs.VoucherStatus = 'Dispatched' AND v.VoucherNo = @Lastvoucher)
BEGIN
    PRINT 'Correct voucher nos'
END
ELSE
BEGIN
    RAISERROR('User Define: Voucher Already Sold',16,1)
END[/code]
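Yes -- wrapping each OR pair in parentheses gives AND the grouping intended. A sketch of the shape only (the EXISTS bodies are unchanged from the code above, abbreviated here as ...):

[code="sql"]IF (    EXISTS ( ... @FirstVoucher ... )
     OR EXISTS ( ... @FirstVoucher ... ) )
   AND
   (    EXISTS ( ... @Lastvoucher ... )
     OR EXISTS ( ... @Lastvoucher ... ) )
BEGIN
    PRINT 'Correct voucher nos'
END[/code]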

sp_send_dbmail Query

Posted: 11 Jul 2013 09:01 AM PDT

Regarding the @query parameter, BOL states: "Note that the query is executed in a separate session, so local variables in the script calling sp_send_dbmail are not available to the query."

I'm calling sp_send_dbmail from code that updates records in a table. I want to send some information about the record that was updated, including data from child tables joined to the updated record. Since I can't pass a local variable to @query, I'm doing something like:

[code="sql"]@query = 'DECLARE @RecordID AS Varchar(12)
SET @RecordID = (select top 1 Record_ID FROM Table where ColumnChangedFrom1 = 0 ORDER BY Modify_Date DESC);
SELECT t1.column1, t2.column1, t2.column2
FROM Table AS t1
INNER JOIN Table2 t2 on t1.RecordID = t2.T1RecordID
WHERE t1.Record_ID = @RecordID'[/code]

While this works so far in testing, I'm not certain that in production it will return the correct records 100 percent of the time. Is there a way to be certain the dbMail query will find the right records?
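One option, sketched: since @query is just a string, the calling session can concatenate the key it already knows into the text, so the mail session no longer re-derives "the latest modified row" and cannot race with other updates (the recipient is a placeholder):

[code="sql"]DECLARE @q nvarchar(max);
SET @q = N'SELECT t1.column1, t2.column1, t2.column2
           FROM Table AS t1
           INNER JOIN Table2 t2 on t1.RecordID = t2.T1RecordID
           WHERE t1.Record_ID = ''' + @RecordID + N'''';
EXEC msdb.dbo.sp_send_dbmail @recipients = N'dba@example.com',
                             @query = @q;[/code]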

Splitting a Full Name

Posted: 11 Jul 2013 07:54 AM PDT

I'm trying to split a full name column up into First / Middle / Last... but some of the names are like "Joe B. W. LastName" or "Jenny MiddleName MaidenName MarriedName" and so on. In the past I've used parsename() and charindex() to solve this with "cleaner" data. How would I go about handling these?

Thanks

APPLY Syntax

Posted: 11 Jul 2013 08:40 AM PDT

I got the following example of some code that uses an OUTER APPLY to get some data I need. I was able to get it to work in my environment, but there's one piece of the syntax I don't understand.

[code]select Table1Key, MAX(LastModifiedDate)
from Table1
outer apply ( select Table1.LastModifiedDate
              union all
              select LastModifiedDate from RTable01 where RTable01.Table1Key = Table1.Table1Key
              union all
              select LastModifiedDate from RTable02 where RTable02.Table1Key = Table1.Table1Key
              union all
              select LastModifiedDate from RTable30 where RTable30.Table1Key = Table1.Table1Key
            ) AllTables(LastModifiedDate)
group by Table1.Table1Key[/code]

What is the AllTables() part at the end of the OUTER APPLY() doing? I can't find any syntax references to it, and the whole thing doesn't work without it, so I feel like I should find out what it's doing!

passing different values into a variable

Posted: 11 Jul 2013 02:55 AM PDT

Hi,

I have an SP that returns the size of the backup files into a table. To execute the SP I have to pass in the parameters @db and @type. @db is the database name and @type is the type of backup file. So, for example, I would type:

[code="sql"]exec dbo.Sp_GetBackup 'adventureworks','d'[/code]

This would return the adventureworks full backup history. My question is how I can automate this, so I don't have to manually enter the database name and the type of backup file I want it to return.
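One sketch: loop over sys.databases and call the proc for each one, keeping the type fixed at 'd' (adjust the WHERE to taste):

[code="sql"]DECLARE @db sysname;
DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE database_id > 4;  -- skip system dbs
OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC dbo.Sp_GetBackup @db, 'd';
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;[/code]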

Finding the most recently modified row from a group of tables

Posted: 11 Jul 2013 04:07 AM PDT

I'm thinking there's some clever way of doing this, probably involving a CTE and recursion, but I'm not seeing how to get started...

Given a [Table1] with a primary key called [Table1Key]: there are 30 tables with [Table1Key] as a foreign key pointing back to [Table1]; call these [RTable01] through [RTable30]. Any specific Table1 row may have related rows in 0 to 30 of these tables. Each of the related tables has a [LastModifiedDate], as does the parent Table1 row. Given two Table1 rows, I want to flag one as 'Most Current' based on the latest of ANY of the LastModifiedDates in the related rows and the parent row. I can certainly do this in a brute force ROAR fashion, but I'm trying to stop doing things like that. Any pointers?

Make the formula configurable (can use in different SSIS packages)

Posted: 03 Oct 2012 08:35 PM PDT

I have to use the same formula to fill in calculated data in different tables. I want to make this formula configurable, so that there is no need to write the formula every time: simply pass the values to the formula and it returns the calculated value. SSIS packages are used to populate the data in the tables, and I can't use a SQL user-defined function because that decreases my SSIS package performance. Please suggest a good approach. Help is really appreciated.

stored procedure keeps spinning

Posted: 11 Jul 2013 01:28 AM PDT

Hi, when I run the SP on one server it keeps spinning, while on the other server I get a result. What could be the reason? I tried WITH RECOMPILE, but no luck.

Thanks

Group by and Where filters

Posted: 11 Jul 2013 03:25 AM PDT

I have a query that seems to operate differently than how I thought SQL handled GROUP BY with WHERE... I thought WHERE is used to filter the rows and then the grouping is done, but what I am seeing is that the COUNT and SUM columns have totals for all the rows, prior to the filter. Is this correct? I thought that was how HAVING worked, and that WHERE was applied before the totals were generated.
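For reference, a small sketch of the documented order (hypothetical table): WHERE prunes rows before aggregation, HAVING prunes groups after it. If pre-filter totals are showing up, the filter has probably ended up in HAVING (or in an outer-join condition) rather than in WHERE:

SELECT Category, COUNT(*) AS Cnt, SUM(Amount) AS Total
FROM Sales
WHERE Amount > 0           -- applied first, row by row
GROUP BY Category          -- then rows are grouped and aggregated
HAVING COUNT(*) > 10;      -- applied last, to the aggregated groups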
