Saturday, July 27, 2013

[how to] MySQL 5.6 DESTDIR does not work properly

MySQL 5.6 DESTDIR does not work properly

Posted: 27 Jul 2013 09:12 PM PDT

I'm trying to compile MySQL 5.6 from source on Ubuntu 12.04 x64.

But when I execute make install DESTDIR="/etc/mysql", it does not work as expected.

MySQL is installed in /etc/mysql/usr/local/mysql.

What's wrong?

Sorry for my English.
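
For context, a sketch of the distinction, assuming the default prefix of /usr/local/mysql: DESTDIR is a staging convention that prepends a root to the install prefix chosen at configure time, so the observed path is the documented behavior; to install directly under /etc/mysql the prefix itself has to change at the cmake step (MySQL 5.6 builds with CMake).

# DESTDIR only stages files under the configure-time prefix
# (default /usr/local/mysql), hence /etc/mysql/usr/local/mysql:
make install DESTDIR=/etc/mysql

# To actually install under /etc/mysql, set the prefix instead:
cmake . -DCMAKE_INSTALL_PREFIX=/etc/mysql
make
sudo make install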

Increased server load after InnoDB deadlock

Posted: 27 Jul 2013 02:38 PM PDT

I have been using InnoDB instead of MyISAM for a couple of months now, and it has really optimized my game database-wise. I was aware of the danger of deadlocks, but hadn't had any problems with them until about a week ago.

I had a deadlock because two cronjobs were clashing with each other, I believe. Anyway, we fixed that by killing both queries and running the cronjobs one after the other manually.

All seemed good, except that the server load has been peaking a lot since then (about every 5 minutes), and I don't really know why. I've checked the running queries at those moments and they seem fine.

Do you have any tips on where to look or how to fix this? I couldn't find much information about this on the internet. Thanks in advance.
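
For reference, two standard MySQL commands that can help narrow this down: the first prints a LATEST DETECTED DEADLOCK section with the full details of the most recent deadlock, and the second shows what is actually running during a load spike.

SHOW ENGINE INNODB STATUS;   -- includes the most recent deadlock details
SHOW FULL PROCESSLIST;       -- run during a spike to see active queries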

Re-mount EBS vol on EC2 instance [migrated]

Posted: 27 Jul 2013 12:59 PM PDT

I am new to sysadmin. I am trying to launch an EBS-backed EC2 instance. I did the following, based on the MongoDB docs:

sudo mdadm --verbose --create /dev/md0 --level=10 --chunk=256 --raid-devices=1 /dev/xvdh1
echo 'DEVICE /dev/xvdh1' | sudo tee -a /etc/mdadm.conf
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
sudo blockdev --setra 128 /dev/md0
sudo blockdev --setra 128 /dev/xvdh1
sudo dd if=/dev/zero of=/dev/md0 bs=512 count=1
sudo pvcreate /dev/md0
sudo vgcreate vg0 /dev/md0
sudo lvcreate -l 90%vg -n data vg0
sudo mke2fs -t ext4 -F /dev/vg0/data
sudo mkdir /data
echo '/dev/vg0/data /data ext4 defaults,auto,noatime,noexec 0 0' | sudo tee -a /etc/fstab
sudo mount /data

Now I have terminated the EC2 instance and am trying to re-mount the EBS volumes on a newly launched instance. What I was not sure about:

  • Which of the above steps do I need to do?
  • Which of the above steps can I skip?
  • Which of the above steps should I NOT do at all (which could make things inconsistent)?

On the new EC2 instance, I tried to execute the following command but then aborted due to the warning. I was not sure:

sudo mdadm --verbose --create /dev/md0 --level=10 --chunk=256 --raid-devices=1 /dev/xvdh1
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: /dev/xvdh1 appears to be part of a raid array:
    level=raid10 devices=5 ctime=Fri Jul 26 18:01:23 2013

After going through the warnings/errors while executing, I skipped the following steps and re-mounted:

sudo pvcreate /dev/md0
sudo vgcreate vg0 /dev/md0
sudo lvcreate -l 90%vg -n data vg0

but all the data in the EBS volume is lost.

If somebody could verify, that would be a great help.
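
For reference, a minimal sketch of a reattach path that reuses the existing metadata rather than recreating it (assuming the same device name /dev/xvdh1). mdadm --create and the pvcreate/vgcreate/lvcreate steps all write fresh metadata over whatever is on the volume, which would explain the data loss described:

# Reassemble the existing array instead of creating a new one
sudo mdadm --assemble /dev/md0 /dev/xvdh1

# Re-activate the volume group that already lives on the array
sudo vgchange -ay vg0

# Mount the existing filesystem
sudo mkdir -p /data
sudo mount /dev/vg0/data /data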

Problems creating a 3-node mongo replica set on EC2

Posted: 27 Jul 2013 05:52 PM PDT

I'm trying to create a 3-node cluster of mongo replicas on EC2 instances, with internal IPs 10.168.xxx.xxx, 10.166.xxx.xxx and 10.188.xxx.xxx. My config file is:

dbpath = /home/ubuntu/mongo-data/
logpath = /home/ubuntu/mongo-data/mongod.log
logappend = true

journal = true
fork = true
smallfiles = true
pidfilepath = /home/ubuntu/mongo-data/mongod.pid
replSet = appNameReplicaSet

I launched mongod -f mongod.config on each instance, and on one of them I tried:

$ mongo
> rs.initiate();
{
    "info2" : "no configuration explicitly specified -- making one",
    "me" : "ip-10-168-66-132:27017",
    "info" : "Config now saved locally.  Should come online in about a minute.",
    "ok" : 1
}
> rs.add('10.166.xxx.xxx:27017');
// after LONG pause
{
    "errmsg" : "exception: need most members up to reconfigure, not ok : 10.188.22.254:27017",
    "code" : 13144,
    "ok" : 0
}

I also tried:

config = {
    _id: 'appNameReplicaSet',
    members: [
        {_id: 0, host: '10.168.xxx.xxx:27017'},
        {_id: 1, host: '10.166.xxx.xxx:27017'},
        {_id: 2, host: '10.188.xxx.xxx:27017'}
    ]
}
rs.reconfig(config, {'force':true})
{ "ok" : 0, "errmsg" : "a replSetReconfig is already in progress" }

.log file:

Sat Jul 27 01:33:59.149 [initandlisten] connection accepted from 127.0.0.1:36931 #4 (1 connection now open)
Sat Jul 27 01:34:02.468 [conn4] replSet replSetInitiate admin command received from client
Sat Jul 27 01:34:02.470 [conn4] replSet info initiate : no configuration specified.  Using a default configuration for the set
Sat Jul 27 01:34:02.470 [conn4] replSet created this configuration for initiation : { _id: "appNameReplicaSet", members: [ { _id: 0, host: "ip-10-168-xxx-xxx:27017" } ] }
Sat Jul 27 01:34:02.470 [conn4] replSet replSetInitiate config object parses ok, 1 members specified
Sat Jul 27 01:34:02.473 [conn4] replSet replSetInitiate all members seem up
Sat Jul 27 01:34:02.473 [conn4] ******
Sat Jul 27 01:34:02.473 [conn4] creating replication oplog of size: 990MB...
Sat Jul 27 01:34:02.474 [FileAllocator] allocating new datafile /home/ubuntu/mongo-data/local.1, filling with zeroes...
Sat Jul 27 01:34:02.485 [FileAllocator] done allocating datafile /home/ubuntu/mongo-data/local.1, size: 511MB,  took 0.01 secs
Sat Jul 27 01:34:02.485 [FileAllocator] allocating new datafile /home/ubuntu/mongo-data/local.2, filling with zeroes...
Sat Jul 27 01:34:02.491 [FileAllocator] done allocating datafile /home/ubuntu/mongo-data/local.2, size: 511MB,  took 0.005 secs
Sat Jul 27 01:34:02.491 [conn4] ******
Sat Jul 27 01:34:02.491 [conn4] replSet info saving a newer config version to local.system.replset
Sat Jul 27 01:34:02.492 [conn4] replSet saveConfigLocally done
Sat Jul 27 01:34:02.492 [conn4] replSet replSetInitiate config now saved locally.  Should come online in about a minute.
Sat Jul 27 01:34:08.435 [rsStart] replSet I am ip-10-168-xxx-xxx:27017
Sat Jul 27 01:34:08.435 [rsStart] replSet STARTUP2
Sat Jul 27 01:34:09.441 [rsSync] replSet SECONDARY
Sat Jul 27 01:34:09.441 [rsMgr] replSet info electSelf 0
Sat Jul 27 01:34:10.440 [rsMgr] replSet PRIMARY
Sat Jul 27 01:34:26.513 [conn4] replSet replSetReconfig config object parses ok, 2 members specified
Sat Jul 27 01:37:45.835 [conn4] couldn't connect to 10.188.xxx.xxx:27017: couldn't connect to server 10.188.xxx.xxx:27017

UPDATE: added output of rs.initiate() and logs.
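
The last log line ("couldn't connect to 10.188.xxx.xxx:27017") points at a network problem rather than a mongod problem; a quick check from the primary's instance (the EC2 security group must allow port 27017 between all three members):

nc -zv 10.188.xxx.xxx 27017   # should report success if the port is reachable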

How to create a backup in MySQL by writing a stored procedure in Workbench? [on hold]

Posted: 27 Jul 2013 06:37 AM PDT

I want to create a backup on a button click in a Windows Forms app by calling a stored procedure in my database. I have done this before in MS SQL Server, but I don't know how to write such a stored procedure in MySQL.
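
A stored procedure cannot launch mysqldump (that runs outside the server), so the closest in-server option is SELECT ... INTO OUTFILE, which exports a single table to a server-side file. A minimal sketch with hypothetical table and path names (requires the FILE privilege; the target file must not already exist):

DELIMITER //
CREATE PROCEDURE backup_mytable()
BEGIN
    -- Exports one table's rows as CSV on the *server's* filesystem
    SELECT * FROM mytable
    INTO OUTFILE '/var/backups/mytable.csv'
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n';
END //
DELIMITER ;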

Creating a Table with parent_id from a flat table

Posted: 27 Jul 2013 07:29 AM PDT

I have a problem concerning MySQL, and I cannot find a solution to it.

I have a flat table (id, city, county, state), for example (1, 'Beckingen', 'Merzig-Wadern', 'Saarland').

A city belongs to a county; a county belongs to a state.

My first attempt was to divide the data into three tables (cities, counties, and states) and to build up the associations via linking tables (cities_counties, ...).

But now I want to create a 'locations' table out of it, where the county is the parent of the city, and the state is the parent of the county.

locations: (id, name, parent_id, type) (type is 'city','county', or 'state')

What would be best practice to create the adjacency list from the flat list?

I'm working in a LAMP environment, just in case a PHP script would fit.

Thanks a lot, Adrian
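
A sketch of one way to populate such a table in plain SQL, assuming the flat table is named flat, locations.id is AUTO_INCREMENT, and county names are unique across states (if they are not, the county and city joins need the extra ancestor columns):

-- States first (no parent)
INSERT INTO locations (name, parent_id, type)
SELECT DISTINCT state, NULL, 'state' FROM flat;

-- Counties, linked to their state
INSERT INTO locations (name, parent_id, type)
SELECT DISTINCT f.county, s.id, 'county'
FROM flat f
JOIN locations s ON s.name = f.state AND s.type = 'state';

-- Cities, linked to their county
INSERT INTO locations (name, parent_id, type)
SELECT DISTINCT f.city, c.id, 'city'
FROM flat f
JOIN locations c ON c.name = f.county AND c.type = 'county';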

Updating a table efficiently using JOIN

Posted: 27 Jul 2013 06:17 PM PDT

I have a table that has the details of households and another that has the details of all the persons associated with the households. For the household table I have a primary key defined using two of its columns, [tempId, n]. For the person table I have a primary key defined using three of its columns, [tempId, n, sporder].

Using the sort order dictated by the clustered indexes on the primary keys, I have generated a unique ID for each household [HHID] and each person [PERID] record (the snippet below is for generating PERID):

ALTER TABLE dbo.persons
    ADD PERID INT IDENTITY
    CONSTRAINT [UQ dbo.persons HHID] UNIQUE;

Now, my next step is to associate each person with the corresponding household, i.e. map each [PERID] to a [HHID]. The crosswalk between the two tables is based on the two columns [tempId, n]. For this I have the following inner join statement:

UPDATE t1
SET t1.HHID = t2.HHID
FROM dbo.persons AS t1
INNER JOIN dbo.households AS t2
    ON t1.tempId = t2.tempId AND t1.n = t2.n;

I have a total of 1,928,783 household records and 5,239,842 person records. The execution time is currently very high.

Now, my questions:

  1. Is it possible to optimize this query further? More generally, what are the rules of thumb for optimizing a join query?
  2. Is there another query construct that can achieve the result I want with better execution time?

I have uploaded the execution plan generated by SQL Server 2008 for the whole script to SQLPerformance.com
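
One common mitigation, sketched below under the assumption that persons.HHID is NULL before the mapping runs: update in batches, so each transaction stays small and the log does not balloon (TOP in an UPDATE requires the parenthesized form):

DECLARE @batch int = 50000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) t1
    SET t1.HHID = t2.HHID
    FROM dbo.persons AS t1
    INNER JOIN dbo.households AS t2
        ON t1.tempId = t2.tempId AND t1.n = t2.n
    WHERE t1.HHID IS NULL;   -- only rows not yet mapped

    IF @@ROWCOUNT = 0 BREAK;
END;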

Restore database - maintenance plan task

Posted: 27 Jul 2013 09:32 AM PDT

I need to schedule a task that restores a database once a week from a backup. I have set up maintenance plans for backing up databases (Management > Maintenance Plans) but cannot find out how to create a restore task - does this need to be an Execute T-SQL Statement Task, or is there a Toolbox item for it?
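
There is no restore task in the maintenance plan Toolbox, so the usual route is an Execute T-SQL Statement Task wrapping a plain RESTORE, along these lines (hypothetical database and path names):

RESTORE DATABASE MyDb
FROM DISK = N'X:\Backups\MyDb.bak'
WITH REPLACE;   -- overwrites the existing database; requires exclusive access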

How to get rid of the "maximum user connections" error?

Posted: 27 Jul 2013 01:52 PM PDT

I am using MySQLi for my webapp, but whenever I visit certain pages I get: mysqli_connect() [function.mysqli-connect]: (42000/1203): User ***_user already has more than 'max_user_connections' active connections.

I have already tried closing all connections, but this does not improve the situation.

Is there a way to know exactly which connections are open at any particular moment, or any other useful data that can help me resolve this issue?

BTW, I'm using PHP 5.2.17 and MySQL 5.1.
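
Two quick ways to see the open connections and the limit in effect (run as a user with the PROCESS privilege):

SHOW FULL PROCESSLIST;                       -- every open connection and its state
SHOW VARIABLES LIKE 'max_user_connections';  -- the per-user limit being hit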

How can I "shift" certain column values to different rows in an existing table?

Posted: 27 Jul 2013 05:46 AM PDT

I'm implementing a queue as a ring buffer in a table, along the lines of what Thomas Kejser talks about here: Implementing Message Queues in Relational Databases. If the ring buffer fills up, I have to increase the number of slots and possibly shift some of the values around. This is part of my routine to add slots to the ring buffer.

I have a table that looks like this:

SlotId  SlotVal1  SlotVal2
1       3         3
2       4         4
3       1         1
4       2         2
5       NULL      NULL
6       NULL      NULL
7       NULL      NULL
8       NULL      NULL

I would like to "shift" the values from slots 3 and 4 forward by 4 slots to rows 7 and 8 so the table would look like this:

SlotId  SlotVal1  SlotVal2
1       3         3
2       4         4
3       NULL      NULL
4       NULL      NULL
5       NULL      NULL
6       NULL      NULL
7       1         1
8       2         2

I tried two different UPDATE queries to shift the rows forward and then clear the previous rows. The first query (reproduced below) doesn't update any rows in my table, and I'm drawing a blank on what I must have left out:

-- shift slot values forward
DECLARE @firtRowToMove AS int = 3;  -- slot id of the first row to move
DECLARE @rowsToShift AS int = 4;    -- number of rows forward to shift
UPDATE [SlotTable]
SET [SlotVal1] = (SELECT [SlotVal1]
                  FROM [SlotTable] AS s2
                  WHERE s2.[SlotId] = ([SlotId] - @rowsToShift)),
    [SlotVal2] = (SELECT [SlotVal2]
                  FROM [SlotTable] AS s2
                  WHERE s2.[SlotId] = ([SlotId] - @rowsToShift))
WHERE [SlotId] >= (@firtRowToMove + @rowsToShift);
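
For what it's worth, the likely culprit is scoping: inside the correlated subqueries the unqualified [SlotId] binds to s2, so the predicate becomes s2.SlotId = s2.SlotId - @rowsToShift, which is never true. A self-join sketch that qualifies both sides:

DECLARE @firtRowToMove AS int = 3;
DECLARE @rowsToShift AS int = 4;

UPDATE s1
SET s1.[SlotVal1] = s2.[SlotVal1],
    s1.[SlotVal2] = s2.[SlotVal2]
FROM [SlotTable] AS s1
INNER JOIN [SlotTable] AS s2
    ON s2.[SlotId] = s1.[SlotId] - @rowsToShift
WHERE s1.[SlotId] >= (@firtRowToMove + @rowsToShift);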

Expanding a dataset based on a weight field

Posted: 27 Jul 2013 12:50 PM PDT

I have a table that contains records in an un-expanded form. Each record has an associated integer weight that essentially informs us how many times the record should be replicated in order to get the true population.

Say I have 3 records in my table. sampn identifies a unique record and weight is the frequency weight. The un-expanded dataset looks like this:

sampn  weight  attrib1  attrib2  attrib3
1      2       23       32       65
2      1       32       56       75
3      3       54       25       87

Once expanded, the dataset will look like this (note: I removed the weight field, but this is not essential):

sampn  attrib1  attrib2  attrib3
1      23       32       65
1      23       32       65
2      32       56       75
3      54       25       87
3      54       25       87
3      54       25       87

I have tried to do this using cursors, but it takes a really long time to execute. Is there a clever way to do this really fast? Any predefined T-SQL stored procedure that achieves this?

Update
Thanks, all, for the answers! A really great learning experience! I performed the expansion operation on my dataset; Paul's auxiliary table of numbers had the best execution time.
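
For reference, a sketch of the auxiliary-numbers approach mentioned above, assuming a table dbo.Numbers(n) pre-filled with integers 1 through at least MAX(weight) (the source table name is hypothetical):

SELECT t.sampn, t.attrib1, t.attrib2, t.attrib3
FROM dbo.SampleData AS t
JOIN dbo.Numbers AS num
    ON num.n <= t.weight          -- each row matches 'weight' numbers
ORDER BY t.sampn;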

Scaling beyond 16 concurrent connections

Posted: 27 Jul 2013 06:30 AM PDT

I've just adopted a Postgres 9.0 database running on RHEL 6.1 on a rather beefy 256-core, 2TB system with decent drives (hardware RAID 50).

I've started benchmarking the system using pgbench so I can get an idea of what this thing can do. The application spends most of its time reading data, so I used the simple 'select' pgbench tests with the following parameters against a data set initialized with a scaling factor of 300:

pgbench -S -j 8 -T 60  

I recorded the TPS reported by pgbench using n+8 (n = 0 to 16) clients for 16 samples. I produced the following chart from the results:

pgbench Results
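
For reference, a sketch of how such a sweep might be scripted (client counts are illustrative; -c sets the client count being varied, -S and -T as in the run above):

for c in 8 16 24 32 40 48 56 64; do
    pgbench -S -c $c -j 8 -T 60
done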

My question is: why the peak at 16 concurrent connections? I ran pgbench from the same host and looked at IO, memory, CPU utilization, all the usual suspects, and found nothing that stood out as a cause. In postgresql.conf I have the following settings:

max_connections = 1024
shared_buffers = 16G
temp_buffers = 2G
work_mem = 128M
maint_work_mem = 1G

%iowait is ~3.8% w/ 16 connections, 8.2% w/ 32, and 8.9% w/ 48.

I recently saw that PG 9.2 has a new 'feature' (http://www.postgresql.org/about/featurematrix/detail/218/) labeled 'Multi-core scalability for read-only workloads', but this seems like a red herring. I can't help but think it's a configuration issue.

TIA.

Cannot Connect Using SSMS to Local SQL Server 2008 R2

Posted: 27 Jul 2013 06:26 AM PDT

About six months ago my vendor installed SQL Server 2008 R2 into a clustered pair of servers, DB01 and DB02. I've been logging on, periodically, to DB01 to audit some of the data contained in the database instance. I've been using SSMS the whole time, and for six months there have been no problems.

Suddenly yesterday, I am unable to connect using SSMS. I get the following error:

Cannot connect to DB01.

Additional information:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)(Microsoft SQL Server, Error: 53.)

This is very confusing. Very little has changed since the last time I've logged in to SSMS. TCP/IP is enabled on SQL Server, as are named pipes.

One thing I know has changed is that on the CLIENT machines that hit it, the TcpTimedWaitDelay parameter has been set to 30, and MaxUserPort has been set to 10000. But that's on the client machines only. Could there be an issue arising from these parameters not being mirrored on the database servers? Could there be an issue arising from these two boxes sharing an IP for clustering? The database is normally accessed through a shared IP, where DB02 takes over the IP if DB01 fails.

I've considered restarting the SQL Server service but that scares the bejeezus out of me. What if this issue spreads to the application currently using the databases?

Creating and tuning a materialized view in Oracle for use by Cognos BI

Posted: 27 Jul 2013 05:43 AM PDT

I am currently in the middle of a Cognos BI implementation and am having difficulty tuning a view so that SQL Server can select its contents to fulfill the incremental load in a timely fashion. The data currently takes over 5 hours to move between Oracle and SQL Server (the Cognos 'Staging' layer).

The source code for this scenario can be accessed via the following URL:

https://github.com/pwl91/source_code

I would like to create a materialized view that uses select * from ifsapp.fb_cust_ord_lines as its source; it will need to refresh fast (i.e. quickly) every day at 2:00. I have attempted this, and below is the result:

create materialized view ifsapp.fb_bi_cust_ord_lines_mv
    tablespace "IFSAPP_DATA" build immediate
    using index pctfree 10 initrans 2 maxtrans 255
    storage(initial 65536 next 1048576 minextents 1 maxextents 2147483645 pctincrease 0
        freelists 1 freelist groups 1 buffer_pool default)
    tablespace "IFSAPP_DATA" refresh fast
    using default local rollback segment
as
    select *
    from ifsapp.fb_cust_ord_lines;

create index ifsapp.fb_fb_bi_cust_ord_lines_mv_i on ifsapp.fb_bi_cust_ord_lines_mv
    (
        order_no,
        line_no,
        rel_no,
        line_item_no
    )
    pctfree 10 initrans 2 maxtrans 255
    tablespace IFSAPP_INDEX_16K;

I know that this will not be 100% correct (maybe not even 50%), but I would really appreciate some assistance. Please ask if you require any further information.
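
Two pieces the snippet above is missing, sketched under the assumption that a ROWID-based fast refresh is acceptable: REFRESH FAST requires a materialized view log on the source table, and the 2:00 schedule can be handled by DBMS_SCHEDULER (the job name is hypothetical):

-- Fast refresh needs a change log on the base table
CREATE MATERIALIZED VIEW LOG ON ifsapp.fb_cust_ord_lines WITH ROWID;

-- Nightly fast ('F') refresh at 02:00
BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'REFRESH_FB_BI_CUST_ORD_LINES_MV',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''IFSAPP.FB_BI_CUST_ORD_LINES_MV'', ''F''); END;',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY;BYHOUR=2;BYMINUTE=0',
        enabled         => TRUE);
END;
/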

SQL Server: subscriber as publisher and hierarchical replication

Posted: 27 Jul 2013 03:43 PM PDT

In Oracle Streams, one-way replication from one DB to another is a basic building block for many replication topologies (N-way, hierarchical, combined and so on), and changes can be captured and applied for the same table at the same time.

But I can't find anywhere in the SQL Server documentation whether a table (article) can be a source (publisher) and destination (subscriber) simultaneously. Is it possible, for example, to set up bidirectional replication using two pairs of publisher->subscriber transactional replication?

Naturally I am more interested in a multi-level hierarchy: for example, one table is replicated from the root node through intermediate nodes to the leaves, a second one from the leaves to the root, and a third one bidirectionally, and for all tables the intermediate nodes can perform DMLs that are replicated too. With Oracle Streams this is easy to achieve, and even more sophisticated configurations are possible, but are they with SQL Server?

UPDATE: It seems it is possible with merge replication and republishing (http://msdn.microsoft.com/en-us/library/ms152553.aspx), but what about transactional replication?

Cannot see Green Button with white arrow in object explorer; DTC config problems?

Posted: 27 Jul 2013 12:43 PM PDT

I have SQL Servers across locations set up for replication. On one of the SQL Server instances that I installed, when connecting locally I can see the green button with the white arrow in Object Explorer. However, when I connect to it from any other location, I cannot see it, nor can I start or stop the SQL Server Agent, even though locally I am able to do that.

Additionally, MSDTC has only 2 items sent, both of which were rejected, whereas other servers have hundreds committed and sent. Is there something wrong with the DTC settings?

Please help.

MySQL failover - Master to Master Replication

Posted: 27 Jul 2013 04:43 PM PDT

My company is trying to implement a MySQL failover mechanism to achieve higher availability in our webservices tier - we commercialize a SaaS solution. To that end we have some low-end VMs scattered across different geographical locations, each containing a MySQL 5.5 server with several DBs, which for the time being are merely slave-replicating from the production server - the objective up until now was just checking the latency and general resilience of MySQL replication.

The plan, however, is to add a master-master replication environment between two servers in two separate locations, and these two instances would handle all the DB writes. The idea wouldn't necessarily imply concurrency; rather the intention is to have a single one of the instances handling the writes and, upon a downtime situation, use a DNS failover service to direct the requests to the secondary server. After the primary comes back online, the binlog generated in the meantime on the secondary would be replicated back, and the DNS failover would restore the requests back to the first one.

I am not an experienced administrator, so I'm asking for your own thoughts and experiences. How wrong is this train of thought? What can obviously go wrong? Are there any much better alternatives? Bash away!

Thanks!
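
For reference, the usual my.cnf skeleton for a two-master pair, with staggered auto-increment settings so that writes on either side cannot collide on generated keys (server IDs and values are illustrative):

# --- master A ---
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1

# --- master B ---
server-id                = 2
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2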

SQL Server - percentage calculation

Posted: 27 Jul 2013 04:32 PM PDT

Sample data :

LINE_NO  E_FIELD  F_FIELD  G_FIELD  HSA_STATUS  FAMILY  CACHE_FAMILY  Count
23053B   00000    00000    00000    S           SUMMIT  WER           43
23053B   00000    00000    00000    T           SUMMIT  WER           144
23053B   00000    00000    00684    T           SUMMIT  WER           2
23053B   00353    00418    00684    T           SUMMIT  WER           1
23053B   00353    00418    00763    T           SUMMIT  WER           1
23053B   00353    00418    01512    T           SUMMIT  WER           1
23053B   00353    00418    06797    T           SUMMIT  WER           1
23053B   00353    00418    30228    T           SUMMIT  WER           1
23053B   00353    00418    31935    T           SUMMIT  WER           2
23053B   05601    01402    00758    T           SUMMIT  WER           1
23053B   05601    01402    09091    T           SUMMIT  WER           1
23053B   05601    01402    65053    T           SUMMIT  WER           1

This is my query:

SELECT LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY,
       Count = SUM(TOTAL)
FROM (
    SELECT LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY,
           COUNT(LINE_NO) AS Total
    FROM TX_HSA_SUMM
    WHERE MT_TIMESTAMP2 BETWEEN ('2013-03-07 10:10:00') AND ('2013-03-08 10:20:00')
    GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY
) AS a
GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, total
ORDER BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, total

This is my sample data, and I have already computed the count, as you can see from the column headers. My problem is that I need to sum the counts and, for each row, get the percentage. For example, the total for the above records is 199, and the first record's count is 43, so the calculation should be 43/199 * 100. How can I view the percentage? Please help me, I need this urgently.
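
A sketch of one way to do this with a window aggregate (available in SQL Server 2005 and later): SUM(...) OVER () puts the grand total on every row, so the percentage becomes a single expression and the outer derived table can be dropped:

SELECT LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY,
       COUNT(LINE_NO)                                       AS [Count],
       100.0 * COUNT(LINE_NO) / SUM(COUNT(LINE_NO)) OVER () AS Percentage
FROM TX_HSA_SUMM
WHERE MT_TIMESTAMP2 BETWEEN '2013-03-07 10:10:00' AND '2013-03-08 10:20:00'
GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY
ORDER BY LINE_NO, E_FIELD, F_FIELD, G_FIELD;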

multi-master to single-slave replication at table level with PostgreSQL or MySQL

Posted: 27 Jul 2013 11:43 AM PDT

Here is my scenario

Master1 hosting DB1
Master2 hosting DB2
...
MasterN hosting DBN

replicate to:

Slave1 hosting DB1, DB2, ..., DBN

I've read similar questions, and they recommend starting different instances on Slave1 and simply doing MasterN -> Slave1(instanceN) replication, as instructed here:

Single slave - multiple master MySQL replication

That would be a piece of cake, but running different instances might be a waste of resources.

I really want to achieve this with a single DBMS instance at Slave1, if possible with PostgreSQL, but I can try MySQL if there is a solution for this.

Any help is appreciated.

Invalid File Name on login to SQLPLUS

Posted: 27 Jul 2013 09:43 AM PDT

The moment I log into SQLPLUS I get an SP2-0556 error. I recently moved to a new OS but am connecting to the same database, so I am not sure if it's because I'm using my old TNSNames or what. I can connect and work normally, but a few issues have arisen with running SQL scripts due to this, where they then say Invalid File Name.

Any help would be greatly appreciated, thanks.

  • the database is 11g,
  • the command I use for running scripts is a custom ant script that basically loops through and runs normal .sql scripts,
  • and I changed from a Ubuntu system to Mac OS X.

The issue is that any new scripts I create (that I know work) give an Invalid File Name error and then stop.
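
One thing worth ruling out (an assumption, not a confirmed diagnosis): SQL*Plus resolves login.sql and script names through the SQLPATH and ORACLE_PATH environment variables, and a stale value carried over from the old Ubuntu environment could produce SP2-0556 at startup:

echo "$SQLPATH" "$ORACLE_PATH"   # stale or malformed paths here can trigger SP2-0556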

SQL Server account delegation setup

Posted: 27 Jul 2013 10:43 AM PDT

I am trying to run a BULK INSERT command on SQL Server 2008, but I am having issues with security. After researching on the internet, the problem seems to have something to do with account delegation and impersonation.

However, all solutions seem to point to an Active Directory setup, which mine is not.

My setup is SQL Server 2008 Express on Windows Server 2008 Standard configured as a workgroup.

Despite using a SQL Server user account, assigning all the user mappings and server roles, and ensuring security is set to SQL, I am getting this error:

Msg 4861, Level 16, State 1, Line 1
Cannot bulk load because the file "\\server_name\file_name.txt" could not be opened. Operating system error code 5 (Access is denied.).

So, how does one set up SQL Server account delegation and impersonation in a workgroup environment?

Thanks in advance
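
One relevant detail worth stating plainly: with SQL Server authentication there is no delegation at all; BULK INSERT reads the file in the security context of the SQL Server service account, so that Windows account needs read permission on the share. A minimal repro once the share permission is granted (hypothetical names):

BULK INSERT dbo.ImportTarget
FROM '\\server_name\share\file_name.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');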

Trouble setting up Nagios to monitor Oracle services

Posted: 27 Jul 2013 02:43 PM PDT

I've got an install of Nagios XI that doesn't seem to want to talk to any of my Oracle services here. I've pulled out the monitoring command and am running it manually, after setting ORACLE_HOME and LD_LIBRARY_PATH of course, but it keeps generating the following error:

/usr/local/nagios # libexec/check_oracle_health --connect "oracle-server:1551" --username user --password "pass" --name OFFDB1 --mode tablespace-can-allocate-next --warning 20 --critical 30
CRITICAL - cannot connect to oracle-server:1551. ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA (DBD ERROR: OCIServerAttach)

I'm still fairly new to Oracle, but my googling seems to indicate that SERVICE_NAME should match the GLOBAL_DBNAME in listener.ora, which is OFFDB1. Do I need to do something else here, like modify the connect string?

As a note, there are multiple instances of Oracle sharing the target box, but each seems to be installed to a separate partition and is running its own listener on various ports.
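
Assuming the Oracle client stack allows EZCONNECT, one thing to try is putting the service name into the connect string itself rather than relying on --name alone (a hedged sketch, not verified against this install):

libexec/check_oracle_health --connect "//oracle-server:1551/OFFDB1" \
    --username user --password "pass" \
    --mode tablespace-can-allocate-next --warning 20 --critical 30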

Help with SELECT queries

Posted: 27 Jul 2013 03:41 PM PDT

I need help with some queries. I struggled to figure out how to do them, but I think I finally caught on, or at least got the results I was looking for in the database.

These are the tables (given by my professor):

Tables

Here are the problem statements and the solutions I have come up with:

1) From that database, get supplier numbers for supplier who supply project J2, in supplier number order.

select SNO
from SPJ
where JNO = 'J2'
order by SNO ASC;

2) Get part numbers for parts supplied by a supplier in LA

select DISTINCT PNO
from spj
where SNO in
    (select SNO
     from Suppliers
     where CITY = 'LA');

3) Get part numbers for parts supplied by a supplier in LA to a project in LA:

select PNO
from spj
where JNO in
    (select JNO
     from projects
     where CITY in
        (select CITY
         from Suppliers
         where CITY = 'LA'));

4) Get the total quantity of part P2 supplied by supplier S3

select SUM(QTY)
from spj
where PNO = 'P2'
  and SNO = 'S3';

5) for each part being supplied to a project, get the part number, the project number, and the corresponding total quantity

select PNO, JNO, QTY
from spj
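
(Problem 5 asks for the corresponding total quantity, which suggests an aggregate rather than the raw rows; a hedged sketch of what may be intended:)

select PNO, JNO, SUM(QTY) as total_qty
from spj
group by PNO, JNO;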

6) Get project names for project supplied by supplier S1 located in HON

select Jname
from projects
where CITY = 'HON'
  and JNO in
    (select JNO
     from spj
     where SNO = 'S1');

7) Get part numbers for parts supplied to any project in LA

select PNO
from spj
where JNO in
    (select JNO
     from projects
     where CITY = 'LA');

SQLFiddle with test data here

Can I use a foreign key index as a shortcut to getting a row count in an INNODB table?

Posted: 27 Jul 2013 05:43 PM PDT

I have a table that has a large number of rows in it.

The primary key (an auto-incrementing integer) is, by default, indexed.

While waiting for a row count to be returned, I did an EXPLAIN in another window, and the results were as follows:

mysql> SELECT COUNT(1) FROM `gauge_data`;
+----------+
| COUNT(1) |
+----------+
| 25453476 |
+----------+
1 row in set (2 min 36.20 sec)

mysql> EXPLAIN SELECT COUNT(1) FROM `gauge_data`;
+----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
| id | select_type | table      | type  | possible_keys | key             | key_len | ref  | rows     | Extra       |
+----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
|  1 | SIMPLE      | gauge_data | index | NULL          | gauge_data_FI_1 | 5       | NULL | 24596487 | Using index |
+----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
1 row in set (0.13 sec)

Since the primary key is guaranteed to be unique, can I just take the number of rows from the EXPLAIN and use that as the row count of the table?

BTW, I believe the difference in numbers is due to the fact that more data is continually being added to this table.
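
A related shortcut with the same caveat: the rows value that EXPLAIN shows is the optimizer's estimate, and information_schema exposes the same kind of estimate directly. For InnoDB both are approximations, not exact counts:

SELECT TABLE_ROWS               -- approximate for InnoDB, like EXPLAIN's estimate
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'gauge_data';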

Query tuning help needed (Hash Match and Table Scans)

Posted: 27 Jul 2013 03:46 PM PDT

I need help optimizing the following query (returning ~8k rows):

SELECT A.sys_id,
       'AppSvrRels' = CAST(SUBSTRING((SELECT (', ' + T.name)
                                      FROM (SELECT A.sys_id, AppSvr.name
                                            FROM GTS_DataStage.SN.cmdb_ci_appl A
                                            LEFT JOIN GTS_DataStage.SN.cmdb_rel_ci X
                                                ON A.sys_id = X.parent
                                            LEFT JOIN GTS_DataStage.SN.cmdb_ci AppSvr
                                                ON X.child = AppSvr.sys_id
                                                AND AppSvr.dv_sys_class_name IN ('UNIX Server', 'Windows Server',
                                                                                 'IBM Mainframe', 'Application Server')
                                            WHERE A.dv_sys_class_name = 'Application'
                                              AND A.dv_u_used_for = 'Production'
                                              AND A.dv_operational_status = 'Deployed') T
                                      WHERE A.sys_id = T.sys_id
                                      FOR XML PATH('')), 3, 4000) AS NVARCHAR(4000))
FROM GTS_DataStage.SN.cmdb_ci_appl A
WHERE A.dv_sys_class_name = 'Application'
  AND A.dv_u_used_for = 'Production'
  AND A.dv_operational_status = 'Deployed'

These tables are on the DataStage and I am not allowed to add indexes. Total row counts are: cmdb_ci_appl: ~20k; cmdb_rel_ci: ~1,200k; cmdb_ci: ~800k.
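
One restructuring worth trying, sketched below (not guaranteed semantically identical for every NULL edge case): materialize the app-to-server pairs once into a temp table, so the correlated FOR XML subquery re-reads a few thousand rows instead of re-joining the large tables for each application:

SELECT A.sys_id, AppSvr.name
INTO #rels
FROM GTS_DataStage.SN.cmdb_ci_appl A
JOIN GTS_DataStage.SN.cmdb_rel_ci X   ON A.sys_id = X.parent
JOIN GTS_DataStage.SN.cmdb_ci AppSvr  ON X.child = AppSvr.sys_id
WHERE A.dv_sys_class_name = 'Application'
  AND A.dv_u_used_for = 'Production'
  AND A.dv_operational_status = 'Deployed'
  AND AppSvr.dv_sys_class_name IN ('UNIX Server', 'Windows Server',
                                   'IBM Mainframe', 'Application Server');

SELECT A.sys_id,
       STUFF((SELECT ', ' + r.name
              FROM #rels r
              WHERE r.sys_id = A.sys_id
              FOR XML PATH('')), 1, 2, '') AS AppSvrRels
FROM GTS_DataStage.SN.cmdb_ci_appl A
WHERE A.dv_sys_class_name = 'Application'
  AND A.dv_u_used_for = 'Production'
  AND A.dv_operational_status = 'Deployed';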

The following image shows a section of the execution plan (I can upload the sqlplan file if needed but there's not much else to see):

Execution_Plan_Screenshot

Any help is appreciated.

Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL

Posted: 27 Jul 2013 07:07 PM PDT

I had to write a simple query where I go looking for people's names that start with a B or a D:

SELECT s.name
FROM spelers s
WHERE s.name LIKE 'B%' OR s.name LIKE 'D%'
ORDER BY 1;

I was wondering if there is a way to rewrite this to be more performant, so I can avoid the OR and/or the LIKE.
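
The two usual rewrites in PostgreSQL are SIMILAR TO and a POSIX regular expression; note that neither is generally faster than the two anchored LIKEs, which the planner can turn into index range scans:

SELECT s.name FROM spelers s WHERE s.name SIMILAR TO '(B|D)%' ORDER BY 1;
SELECT s.name FROM spelers s WHERE s.name ~ '^[BD]' ORDER BY 1;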

Figuring out database timezone

Posted: 27 Jul 2013 01:27 PM PDT

Does anybody know of a way to alter the time zone on a SQL database?

Is there any tool that could find the objects using the date and time?
