Monday, September 2, 2013

[how to] Are there any problems with deploying an SQL Server database to a production server by taking a backup?

Are there any problems with deploying an SQL Server database to a production server by taking a backup?

Posted: 02 Sep 2013 08:35 PM PDT

This is a slightly loaded question in that I have already assumed that the described scenario is wrong.

A DBA is deploying an application I have written that includes an MS SQL Server 2008 database. He has asked me to take a database backup from my development machine so he can restore it to the production server, thus deploying it. This is a greenfield deployment so there is no existing data to be migrated. I was expecting to provide a DDL script, which I have diligently tested and ensured that it contains everything required. If I execute it in SSMS, the database is created in one click.

To me, using the backup facility for deployment does not seem right, but without being an expert in SQL Server I can't think of a solid reason not to do it. I would have thought, for example, that there would be some 'contamination' of the database from the development machine - perhaps the computer name, directory structure or user names stored in there somewhere. Is this the case, or is backup and restore a valid deployment technique?
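
For what it's worth, one concrete difference can be sketched in T-SQL (the names and paths below are placeholders, not taken from the question): a restored backup keeps the source database's logical file names and file layout unless they are explicitly remapped, whereas a DDL script creates files wherever the target server's defaults point.

RESTORE DATABASE MyApp
FROM DISK = N'C:\Temp\MyApp_dev.bak'
WITH MOVE 'MyApp'     TO N'D:\SQLData\MyApp.mdf',
     MOVE 'MyApp_log' TO N'E:\SQLLogs\MyApp_log.ldf';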

psql, record separators in the data

Posted: 02 Sep 2013 08:08 PM PDT

I want to use psql to list all of the databases on a Postgres server, to be parsed by a script. This command lists them:

psql -l -A -t  

but the output shows an obvious issue: the records are separated by newlines, but also contain newlines.

$ psql -l -A -t
postgres|postgres|UTF8|en_CA.UTF-8|en_CA.UTF-8|
template0|postgres|UTF8|en_CA.UTF-8|en_CA.UTF-8|=c/postgres
postgres=CTc/postgres
template1|postgres|UTF8|en_CA.UTF-8|en_CA.UTF-8|=c/postgres
postgres=CTc/postgres

Using the -R option I can change the record separator, but it seems like no matter what I change it to, there's the risk of that string appearing in the data. Is it possible to instead tell psql to replace the newlines in the data with something else? (and then what if that string also appears in the data?)

I'd also tried to set the record separator to a null character with such sequences as -R '\000' and -R "\0", but it doesn't seem to interpret escape sequences in the parameter at all, and just uses the literal string \000 instead.

The other option I know of to list all databases is:

psql --quiet --no-align --tuples-only --dbname=postgres --username=postgres --host=127.0.0.1 --port=5432 --command="SELECT datname FROM pg_database"  

but that requires me to give the password for the postgres user, so it's not desirable. Perhaps there's another way to get a list of the names of all databases?
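
Setting the authentication question aside, the list itself can come from a plain catalog query, which returns one clean row per database with no embedded newlines (the newlines in the -l output come from the access-privileges column). A minimal sketch:

SELECT datname FROM pg_database WHERE NOT datistemplate ORDER BY datname;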

why is this query taking so long?

Posted: 02 Sep 2013 07:34 PM PDT

+----+-------------+-----------+----------+---------+------+----------+---------------------------------------------------------+
| Id | User        | Host      | db       | Command | Time | State    | Info                                                    |
+----+-------------+-----------+----------+---------+------+----------+---------------------------------------------------------+
| 39 | tigase_user | localhost | tigasedb | Query   | 5406 | updating | delete from tig_pairs where uid >= 2 and uid <= 700000 |
| 40 | tigase_user | localhost | tigasedb | Query   |    0 | NULL     | show processlist                                        |
+----+-------------+-----------+----------+---------+------+----------+---------------------------------------------------------+

It just stays in the 'updating' state for most of the 5406 seconds. MySQL server version: 5.1.69-0ubuntu0.11.10.1-log (Ubuntu).
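
A first diagnostic step might look like this (a sketch only; it uses the table and column names shown in the process list, and since MySQL 5.1 cannot EXPLAIN a DELETE directly, the equivalent SELECT is explained instead):

EXPLAIN SELECT COUNT(*) FROM tig_pairs WHERE uid >= 2 AND uid <= 700000;
SHOW INDEX FROM tig_pairs;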

thanks

I need to do a test case on CDP (corporate data program) CARD: create CDP staging tables and incorporate them into the extract process [on hold]

Posted: 02 Sep 2013 04:39 PM PDT

Requirements: The CDP source tables are not stored in BFS_LOAD_STAGING; they are written directly to CDP_STG. This is because the cards files are deltas and we need to maintain the full dataset somewhere, and the CDP_STG schema was chosen as the operational store area.

CDP CARD Solution:

  1. Create a set of partitioned CDP Staging tables for the cards data
  2. Create a CDP_LS_00_CARD_Master_SEQ to populate those new tables from the CDP_CARD_* tables
  3. Create the corresponding Framework Meta Data to support the new master sequence
  4. Create a Control-M job to trigger the new sequence. Make it dependent upon PUDQDCARDLS, the Control-M job that processed the Cards files
  5. Modify all the CARD Extract jobs to use the new partitioned CDP Staging tables

Testing:

  1. Run the CDP_LS_00_CARD_Master_SEQ DataStage job to populate the CDP Staging tables
  2. Run the CDP_CARD_00_Master_SEQ DataStage job to populate the CDP Extract tables
  3. Check that the new process produces the same results in the CDP Extract tables

Joining data from 2 different servers

Posted: 02 Sep 2013 01:57 PM PDT

Is it possible to host my databases on two different servers?

For example, if I store my comments on Heroku and my users on some other server, is it possible to join data across the two servers?
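
If both databases happen to be PostgreSQL, one well-known way to do this is the dblink extension; the sketch below uses hypothetical table names and connection details purely for illustration:

-- requires the dblink extension on the local (comments) database
SELECT c.id, c.body, u.name
FROM comments AS c
JOIN dblink('host=other-server dbname=users_db user=app password=secret',
            'SELECT id, name FROM users')
     AS u(id integer, name text)
  ON u.id = c.user_id;

Most other RDBMSs have an equivalent (linked servers in SQL Server, the FEDERATED engine in MySQL), but cross-server joins are generally much slower than keeping related data on one server.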

How to index for a group by with columns from different tables?

Posted: 02 Sep 2013 01:46 PM PDT

Is it possible to create an index from columns of 2 different tables?
If not, how do we optimize a query that does a group by on columns of 2 different tables?

For example: the following statement

explain select p.name
from Person p
join JobStatus js
group by p.name, js.job_tickets
order by p.name, p.lastname, js.job_tickets;

It uses indexes if I have one on (p.name, p.lastname) and one on job_tickets.
But I have noticed that on other queries, and with more than one joined table, the index is not used.
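
For what it's worth, an index belongs to a single table, so the closest you can get is one index per side of the join; a sketch using the names from the query above:

CREATE INDEX idx_person_name_lastname ON Person (name, lastname);
CREATE INDEX idx_jobstatus_job_tickets ON JobStatus (job_tickets);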

Benefit to keeping a record in the database, rather than deleting it, for performance issues?

Posted: 02 Sep 2013 02:57 PM PDT

So I have a client that I am building a Rails app for, using PostgreSQL.

He made the comment below about preferring to hide records rather than delete them, and since this is the first time I have heard of this, I figured I would ask for your thoughts.

I'd rather hide than delete because deletions in tables eventually lead to table index havoc that causes queries to take longer than expected (much worse than Inserts or Updates). This won't be a problem in the beginning of the site (it gets exponentially worse over time), but seems like an easy issue to never encounter by just not deleting anything (yet) as part of the "everyday" web application functionality. We can always handle deletions much later as part of a Data Optimization & Maintenance process and re-index tables in that process on some (yet to be determined) scheduled basis.

In all the Rails apps I have built, I have never had an issue with records being deleted and it affecting the index.

Am I missing something? Is this a problem that used to exist, but modern RDBMS products have fixed it?

Looking for performant database-design for large data sets modeling availability problem [on hold]

Posted: 02 Sep 2013 05:37 PM PDT

I am looking for a performant database design to store and query information about the location, availability and other attributes of a shared "good" in an online sharing community that is yet to be designed (based on Drupal + MySQL). Due to an NDA I'll take a car-sharing community as the example:

Users can rent cars provided by other users. The cars have attributes and availability that need to be stored and queried. Handling the general attributes and locations is not the problem, and proximity is not that difficult either, but querying for all available cars seems to be tough.

A few examples for possible queries:

  1. get all cars available now
  2. get all cars available tomorrow from 8am to 2:30pm
  3. get all cars continuously available from 2013-10-01 until 2013-10-30
  4. get all cars available each Tuesday to Thursday from 10am to 6pm from 2013-11-01 until 2013-11-24

Cars that aren't available should not appear in the query result, but cars that are available in the query range (but maybe not on other times/dates) should appear, because they are available to rent.

The project will be huge: there will be some hundreds of thousands of users and cars, and millions of contracts.

Any ideas how to store the availability (or unavailability) in the database?

One idea is to use an RDBMS (MySQL) with three database tables, one for the:

  1. cars (with all car information),
  2. booking contracts (with all booking information),
  3. times where the cars aren't available (redundant information, as information is already contained in compact form in booking table (2), but required to query availability).

The date/times where cars aren't available could be split into one row per day and contract, like this (BOOKING_ID in this example matches the numbers of the example queries above; the booking table will refer to the car table so all information is connected):

BOOKING_ID | DATE_TIME_FROM   | DATE_TIME_UNTIL
-----------------------------------------------
1          | 2013-09-02 08:00 | 2013-09-02 10:00
2          | 2013-09-03 08:00 | 2013-09-03 14:30
3          | 2013-10-01 08:00 | 2013-10-30 08:00
4          | 2013-11-05 10:00 | 2013-11-05 18:00
4          | 2013-11-06 10:00 | 2013-11-06 18:00
4          | 2013-11-07 10:00 | 2013-11-07 18:00
4          | 2013-11-12 10:00 | 2013-11-12 18:00
4          | 2013-11-13 10:00 | 2013-11-13 18:00
4          | 2013-11-14 10:00 | 2013-11-14 18:00
4          | 2013-11-19 10:00 | 2013-11-19 18:00
4          | 2013-11-20 10:00 | 2013-11-20 18:00
4          | 2013-11-21 10:00 | 2013-11-21 18:00

To query for available cars I "just" need to filter on the general attributes, join the table above, and check that there are no bookings in the desired period of time. The downside is that when there are many cars and bookings, and even more not-available dates, the joins will be really large, and I guess it will get very slow soon. I am not sure whether the concerns about SQL (MySQL) are justified, or whether it is just paranoia triggered by the increased popularity of NoSQL DBMSs.
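
As an illustration, query 2 from the list above ("available tomorrow from 8am to 2:30pm") could be written against this design roughly as follows; the table and column names are assumptions based on the three tables described:

SELECT c.*
FROM cars AS c
WHERE NOT EXISTS (
    SELECT 1
    FROM bookings    AS b
    JOIN unavailable AS u ON u.booking_id = b.booking_id
    WHERE b.car_id          = c.car_id
      AND u.date_time_from  < '2013-09-03 14:30'   -- requested end
      AND u.date_time_until > '2013-09-03 08:00'   -- requested start
);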

Although the Drupal-based website primarily uses MySQL, an alternative approach might be to duplicate the information for cars and bookings in MongoDB and have a document for each car that includes the booking information, like the example below. I am not sure whether this example is correct, nor how to query it efficiently, as I have not yet worked with MongoDB. The idea of using MongoDB came up while I was thinking about using Apache Solr to search for available cars. Since Solr could be seen as document-based storage, I guess MongoDB would be similar but maybe with less overhead, because no text-based search is required.

{
  car_id: 1,
  type: 'SUV',
  brand: 'BMW',
  model: 'X5 E70',
  color: 'black',
  seats: 5,
  power: '210',
  from: '2013-09-01 00:00',
  until: '2013-12-01 24:00',
  bookings: [
    { id: 1, from: '2013-09-02 08:00', until: '2013-09-02 10:00' },
    { id: 2, from: '2013-09-02 08:00', until: '2013-09-02 14:30' },
    { id: 3, from: '2013-10-01 08:00', until: '2013-10-30 08:00' },
    { id: 4, from: '2013-11-05 10:00', until: '2013-11-05 18:00' },
    { id: 4, from: '2013-11-06 10:00', until: '2013-11-06 18:00' },
    {...},
    { id: 4, from: '2013-11-21 10:00', until: '2013-11-21 18:00' }
  ]
}

I think with both approaches I'll run a daily job to clean out irrelevant (outdated past booking) data to keep the tables/collections as small as possible.

It would be nice to get answers and comments about the above approaches. Will one or the other work fine? Are there better ways/models to handle this problem?

CHECKPOINT prevents database drop

Posted: 02 Sep 2013 01:28 PM PDT

I'm working in the following scenario: I'm creating a script that recreates an entire database. The script involves destruction and recreation of the database and all its objects, as well as a great number of insertions from linked servers. The idea is to be able to recreate the database at will until the development phase ends. My problem is that the script generates high log activity, and soon a CHECKPOINT occurs. I'm killing all connections in my script prior to dropping the database, but as CHECKPOINT is a system process I can't kill it, and my script fails. I don't need recovery for this database at this stage; my question is whether I can disable logging for this database to avoid the CHECKPOINT.
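
One pattern sometimes used to make the drop win against other sessions is sketched below (the database name is a placeholder); whether it also helps when the blocker is a system checkpoint is exactly the open question here:

ALTER DATABASE [DevRebuild] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [DevRebuild];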

When to make changes to the cost threshold for parallelism

Posted: 02 Sep 2013 10:35 AM PDT

While examining a performance issue, I have seen an influx of CXPACKET waits, suggesting I might need to look at the cost threshold for parallelism and perhaps MAXDOP.

Before making any drastic changes to MAXDOP I have followed the advice of many others, including @mrdenny's answer to CXPACKET Waits performance tune for SQL Server 2008 and @aron-Bertrand's answer to Dealing with CXPACKET waits - setting cost threshold for parallelism. I have added a nightly maintenance step to fully update the statistics. This feels like a sensible move.

However, making modifications to the cost threshold is still something which niggles me.

At what point should the cost threshold for parallelism be altered? Does anyone have an example of where (after examining the cost of their queries and workload) they made a change to this setting?
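
For reference, the setting itself is changed like this (the value 50 is purely illustrative, not a recommendation):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;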

Apologies if this is something which has been answered in a previous question.

Thanks!

How to configure MySQL master & slave to replicate databases from master to slave?

Posted: 02 Sep 2013 07:20 PM PDT

I have already set up replication of one database from master to slave. Now I want to replicate two databases from master to slave, but using the same approach it doesn't work. What should I do? In other words: I've completed replication of one database between master and slave, and now I want to add another database to the setup. How can I do that? This is the master's my.cnf:

[mysqld]
# master configure
server-id=1
log-bin=mysql-bin
binlog-do-db=webdb,just_test
binlog-ignore-db=mysql
# master configure
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

And this is my slave's conf:

[mysqld]
# configure master-slave
server-id=2
replicate-ignore-db=mysql
replicate-do-db=webdb,just_test
# configure master-slave
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

And the backup method is like this: on the master I do

mysql> GRANT REPLICATION SLAVE ON *.* TO slave@192.168.74.236 IDENTIFIED BY 'hello';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT FILE,SELECT,REPLICATION SLAVE ON *.* TO slave@192.168.74.236 IDENTIFIED BY 'hello';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH TABLES WITH READ LOCK;
Query OK, 0 rows affected (0.00 sec)

Then cd /var/lib/mysql and tar -cvf data.tar web_db just_test (web_db and just_test are the data folders I want to back up), scp data.tar to the slave and tar -xvf it there, and .... back on the master run mysql> UNLOCK TABLES;

Then restart the mysqld service on both master and slave, and run the following command on the master:

mysql> SHOW SLAVE STATUS\G
Empty set (0.00 sec)

mysql> SHOW MASTER STATUS\G
************* 1. row *************
File: mysql-bin.000001
Position: 106
Binlog_Do_DB: webdb
Binlog_Ignore_DB: mysql
1 row in set (0.00 sec)

Then run the following commands on the slave:

[root@Slave mysql]# mysql -h 127.0.0.1 -u root -phello
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.69 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CHANGE MASTER TO MASTER_HOST='192.168.74.235', MASTER_USER='slave', MASTER_PASSWORD='hello', MASTER_LOG_FILE='mysql-bin.000001';
Query OK, 0 rows affected (0.02 sec)

mysql> start slave;

And this is the slave status:

mysql> show SLAVE status\G;  

************* 1. row *************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.74.235
                  Master_User: slave
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 106
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 251
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: webdb,just_test
          Replicate_Ignore_DB: mysql
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 106
              Relay_Log_Space: 407
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
1 row in set (0.00 sec)

And this is the error log:

130829 15:39:49 [Note] Error reading relay log event: slave SQL thread was killed
130829 15:39:49 [Note] Slave I/O thread killed while connecting to master
130829 15:39:49 [Note] Slave I/O thread exiting, read up to log 'FIRST', position 4
130829 15:39:51 [Note] 'CHANGE MASTER TO executed'. Previous state master_host='192.168.74.235', master_port='3306', master_log_file='', master_log_pos='4'. New state master_host='192.168.74.235', master_port='3306', master_log_file='mysql-bin.000001', master_log_pos='4'.
130829 15:39:56 [ERROR] Slave I/O: error connecting to master 'slave@192.168.74.235:3306' - retry-time: 60 retries: 86400, Error_code: 2013
130829 15:39:56 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000001' at position 4, relay log './mysqld-relay-bin.000001' position: 4
130829 15:41:56 [Note] Slave I/O thread: connected to master 'slave@192.168.74.235:3306', replication started in log 'mysql-bin.000001' at position 4
130829 15:42:20 [Note] Error reading relay log event: slave SQL thread was killed
130829 15:42:20 [Note] Slave I/O thread killed while reading event
130829 15:42:20 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.000001', position 106
130829 15:42:23 [Note] 'CHANGE MASTER TO executed'. Previous state master_host='192.168.74.235', master_port='3306', master_log_file='mysql-bin.000001', master_log_pos='106'. New state master_host='192.168.74.235', master_port='3306', master_log_file='mysql-bin.000001', master_log_pos='106'.
130829 15:42:27 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000001' at position 106, relay log './mysqld-relay-bin.000001' position: 4
130829 15:42:27 [Note] Slave I/O thread: connected to master 'slave@192.168.74.235:3306', replication started in log 'mysql-bin.000001' at position 106

What should I do? My English is poor, so maybe I haven't explained myself well. Thank you for your help!

opinion about the migration to hibernate orm [on hold]

Posted: 02 Sep 2013 11:41 AM PDT

We are investigating the use of Hibernate ORM as a replacement for traditional DAOs using queries in a Java ERP system. Currently we have:

  1. 128 classes for loading data using custom queries
  2. 120 classes for saving or updating a table (using direct mapping)
  3. 132 classes for lists using database views
  4. several reports that receive a JDBC connection and run their own queries (JasperReports)

And we have the following questions:

  1. Is it advisable to make this change?
  2. How will it affect the speed of the system?

Database design with multiple tables

Posted: 02 Sep 2013 12:53 PM PDT

I have a data structure as follows:

Two main components: Projects and Results

So I have a table called Project which has multiple fields such as Id, Name, Title, StartDate, EndDate and many more which correspond to a project.

And each project can have many Results. So at first I thought of making a single results table, but there is a problem: I have many types of results (around 15), and they do not have many fields in common. So then I thought of making a separate table for each type of result, but then it would be very hard to get all the results for a project, because they could be in any of the result tables.

So to be more clear I have the following data:

For Project: ProjectID, UserID, Type, Title, Summary, StartDate, EndDate, Value, Website

For results I have the following types:

  • Book: Title, Authors, Year, publisher, city, number_pages
  • Book Chapter: [All the fields from book] + chapter title, number of pages for chapter, pagination
  • Article: type, title, authors, magazine, ISSN, year, pagination
  • Thesis: title, author, coordinator, year
  • Citation: category, number of citations
  • Events: type, name, period, members
  • Brevets: author, name, holder, institution, number, type
  • And five that have the exact same structure (technology, methods, products...): name, authors
  • And a distinct category: Others

How can I structure the database so it works in the best way?
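
One common way to model this kind of structure is a supertype/subtype (class-table inheritance) layout; the sketch below uses two of the result types from the list and illustrative names only:

CREATE TABLE result (
    result_id   INT PRIMARY KEY,
    project_id  INT NOT NULL,
    result_type VARCHAR(20) NOT NULL,   -- 'book', 'article', ...
    FOREIGN KEY (project_id) REFERENCES project (project_id)
);

CREATE TABLE result_book (
    result_id    INT PRIMARY KEY,
    title        VARCHAR(255),
    authors      VARCHAR(255),
    year         INT,
    publisher    VARCHAR(255),
    city         VARCHAR(255),
    number_pages INT,
    FOREIGN KEY (result_id) REFERENCES result (result_id)
);

CREATE TABLE result_article (
    result_id  INT PRIMARY KEY,
    type       VARCHAR(50),
    title      VARCHAR(255),
    authors    VARCHAR(255),
    magazine   VARCHAR(255),
    issn       VARCHAR(20),
    year       INT,
    pagination VARCHAR(50),
    FOREIGN KEY (result_id) REFERENCES result (result_id)
);

"All results for a project" is then a single query against result, joined to a subtype table only when the type-specific fields are needed.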

Why does my query work on a VIEW but not on a similar table?

Posted: 02 Sep 2013 10:12 AM PDT

I have a view openstreetmapview that combines two tables with different columns. The first table is openstreetmap and the second is B.

When I execute this query on the view:

select this_.gid             as y0_
     , this_.openstreetmapId as y1_
     , this_.name            as y2_
     , this_.streetType      as y3_
     , this_.oneWay          as y4_
     , this_.location        as y5_
     , this_.isIn            as y6_
     , this_.countryCode     as y7_
     , length                as y8_
     , distance_sphere(
           GeometryFromText('POINT(11.059734344482422 46.07373046875)', 4326)
         , ST_line_interpolate_point(
               this_.shape
             , st_line_locate_point(
                   this_.shape
                 , GeometryFromText('POINT(11.059734344482422 46.07373046875)', 4326)
               )
           )
       ) as y9_
     , ST_AsText(ST_ClosestPoint(
           this_.shape
         , GeometryFromText('POINT(11.059734344482422 46.07373046875)', 4326)
       )) as y10_
from OpenStreetMapView this_;

Then I get around 50k results. When I execute the same query on the table, even though the table has the same columns the query needs, it returns 0 rows. Why is that?

I use a PostgreSQL 8.4 database with PostGIS.

Recovering a dropped column?

Posted: 02 Sep 2013 09:45 AM PDT

I have a PostgreSQL table:

CREATE TABLE essays (
    id serial primary key,
    content text
);

I inserted a few rows, then ran:

ALTER TABLE essays DROP COLUMN content;  

How can I recover the data? I'm willing to use any means.

Data center from scratch which way to go

Posted: 02 Sep 2013 11:05 AM PDT

My budget for a data center (hardware, OS licences, SQL Server licences without CALs) is around $50K~$60K.

I need to support around 200 users, and my apps only work in a Microsoft environment (Windows Server and SQL Server).

I've talked to a lot of "so-called" experts and every one had a different vision for my data center. The more I talk with vendors, the more puzzled I am about how I should design it; perhaps this is because I have not yet met any SQL Server expert who knows what to do.

My area is quite poor in IT specialists.

My app is not mission critical and we can afford downtime of up to 1 working day (8 hours). I currently have 1 main database (80GB of data and 20GB of SQL Server log) for writing transactions.
That database is transactionally replicated by SQL Server to a second physical SQL Server, and the web servers read large reports from there.

I have around 200 users who communicate with my SQL Servers through the web server.

Here is what I have considered so far.

  1. First scenario

    3x physical servers (no storage, no SAN, no virtualization). One server will act as the web server; the other two will be SQL Servers with 2012 Standard Edition. Each server will have 96GB RAM and an Intel Xeon 6C processor at 2GHz. In this scenario I can afford IBM enterprise SSDs: each of the two SQL Servers can have 8 x 250GB SSD SATA disks. The first server will replicate the database to the other.

  2. Second scenario

    2x physical servers plus one IBM or HP storage array with 10~14 600GB 10K SAS disks. All of that virtualized with VMware and running one SQL Server Standard instance, without SQL replication, but with 200GB of RAM for the SQLOS.

Personally, I do not have experience with virtualization and storage systems; I am more comfortable with SQL Server replication.

Here are some questions for which I am seeking direction on which way to go.

Should I avoid virtualization for machines which are dedicated to running SQL Server?

Is SQL Server 2012 ready for production? I know this is perhaps a silly question, but one of the IT guys with plenty of certificates says they always run production instances on the second-to-last Microsoft platform. I am really confused by how strongly he claims that they are not going to install and configure MS servers (OS, SQL Server) on the 2012 generation.

Do I need SSDs besides plenty of RAM? Unfortunately I can't do much about redesigning queries!

Here is what I captured during a 3-hour Monday peak time.

Screenshots: reporting database (replication subscriber), replication, and the transactional database (replication publisher).

Selecting with multiple ands

Posted: 02 Sep 2013 05:52 PM PDT

I am having a little trouble wrapping my head around filtering a query in the way that I need. My current sql:

SELECT `t0`.`id` AS `t0_c0`, `t0`.`language_id` AS `t0_c1`, `t0`.`attribute_assignment_id` AS `t0_c2`,
       `t0`.`data` AS `t0_c3`, `t0`.`slug` AS `t0_c4`,
       `t1`.`id` AS `t1_c0`, `t1`.`attribute_id` AS `t1_c1`, `t1`.`entity_id` AS `t1_c2`,
       `t2`.`id` AS `t2_c0`, `t2`.`type` AS `t2_c1`,
       `t3`.`id` AS `t3_c0`, `t3`.`language_id` AS `t3_c1`, `t3`.`attribute_id` AS `t3_c2`,
       `t3`.`name` AS `t3_c3`, `t3`.`slug` AS `t3_c4`
FROM `attribute_assignment_i18n` AS `t0`
LEFT JOIN `attribute_assignment` AS `t1` ON (`t0`.`attribute_assignment_id` = `t1`.`id`)
LEFT JOIN `attribute` AS `t2` ON (`t1`.`attribute_id` = `t2`.`id`)
LEFT JOIN `attribute_i18n` AS `t3` ON (`t2`.`id` = `t3`.`attribute_id`)
WHERE `t1`.`entity_id` IN ('product_336', 'product_337', 'product_338', 'product_339', 'product_340',
                           'product_341', 'product_342', 'product_343', 'product_344', 'product_345')
-- There can be any number of these ANDs
AND `t3`.`slug` IN ('country', 'type')
AND `t0`.`slug` IN ('rose', 'france')

Which produces a result like this:

+-------+-------+-------+--------+--------+-------+-------+-------------+-------+--------------+-------+-------+-------+---------+---------+
| t0_c0 | t0_c1 | t0_c2 | t0_c3  | t0_c4  | t1_c0 | t1_c1 | t1_c2       | t2_c0 | t2_c1        | t3_c0 | t3_c1 | t3_c2 | t3_c3   | t3_c4   |
+-------+-------+-------+--------+--------+-------+-------+-------------+-------+--------------+-------+-------+-------+---------+---------+
|     1 | en    |     1 | France | France |     1 | 1     | product_342 |     1 | default_text |     1 | en    |     1 | Country | country |
|    11 | en    |    11 | France | France |    11 | 1     | product_343 |     1 | default_text |     1 | en    |     1 | Country | country |
|    31 | en    |    31 | France | France |    31 | 1     | product_345 |     1 | default_text |     1 | en    |     1 | Country | country |
|    32 | en    |    32 | Rose   | Rose   |    32 | 2     | product_345 |     2 | default_text |     2 | en    |     2 | Type    | type    |
|    41 | en    |    41 | France | France |    41 | 1     | product_341 |     1 | default_text |     1 | en    |     1 | Country | country |
|    51 | en    |    51 | France | France |    51 | 1     | product_340 |     1 | default_text |     1 | en    |     1 | Country | country |
|    61 | en    |    61 | France | France |    61 | 1     | product_337 |     1 | default_text |     1 | en    |     1 | Country | country |
|    71 | en    |    71 | France | France |    71 | 1     | product_338 |     1 | default_text |     1 | en    |     1 | Country | country |
|    81 | en    |    81 | France | France |    81 | 1     | product_339 |     1 | default_text |     1 | en    |     1 | Country | country |
+-------+-------+-------+--------+--------+-------+-------+-------------+-------+--------------+-------+-------+-------+---------+---------+

I have also tried wrapping the end ANDs in a container AND as well as using ORs:

AND (`t3`.`slug` = 'country' AND `t0`.`slug` IN ('france'))
OR  (`t3`.`slug` = 'type'    AND `t0`.`slug` IN ('rose'))

What I need is to return only the product identifiers that match both of the last two ANDs.

+-------+-------+-------+--------+--------+-------+-------+-------------+-------+--------------+-------+-------+-------+---------+---------+
| t0_c0 | t0_c1 | t0_c2 | t0_c3  | t0_c4  | t1_c0 | t1_c1 | t1_c2       | t2_c0 | t2_c1        | t3_c0 | t3_c1 | t3_c2 | t3_c3   | t3_c4   |
+-------+-------+-------+--------+--------+-------+-------+-------------+-------+--------------+-------+-------+-------+---------+---------+
|    31 | en    |    31 | France | France |    31 | 1     | product_345 |     1 | default_text |     1 | en    |     1 | Country | country |
|    32 | en    |    32 | Rose   | Rose   |    32 | 2     | product_345 |     2 | default_text |     2 | en    |     2 | Type    | type    |
+-------+-------+-------+--------+--------+-------+-------+-------------+-------+--------------+-------+-------+-------+---------+---------+

The number of these ANDs can change; e.g. there could be 1 or 9 of these extra ANDs.
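
One common pattern for this kind of "match every attribute/value pair" filter is relational division: treat each pair as an ORed condition, then keep only the entities that satisfied as many pairs as were supplied. A sketch against the aliases above (not tested against the real schema):

SELECT t1.entity_id
FROM attribute_assignment      AS t1
JOIN attribute_assignment_i18n AS t0 ON t0.attribute_assignment_id = t1.id
JOIN attribute_i18n            AS t3 ON t3.attribute_id = t1.attribute_id
WHERE (t3.slug = 'country' AND t0.slug = 'france')
   OR (t3.slug = 'type'    AND t0.slug = 'rose')
GROUP BY t1.entity_id
HAVING COUNT(DISTINCT t3.slug) = 2;   -- the number of pairs supplied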

How to -create- performance issues? - oracle [on hold]

Posted: 02 Sep 2013 11:43 AM PDT

I want to dedicate some time to learning more about performance and tuning issues, and I have assigned a clean DB for that. I want to know how I can load some data into it along with performance-problem queries/DML/DDL. Do you know of some scripts that can cause/generate that (purposely or not)?

The idea is to learn the basics (and maybe more) of EM/AWR/ADDM etc., tuning- and performance-wise. I don't really know what specifically to ask, but I'm guessing index problems, SELECT issues, wrong ways to access the data, etc.

Books are great, but I have the knowledge; I need some actual hands-on work with this.

Maybe it's not a Q&A question at all, but for me any answer (as long as it is an answer to my question) will do.

EDIT: I don't want just a server load script, because that has no point, plus that's not what I want.
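
As a trivial example of the kind of self-inflicted problem that shows up nicely in AWR/ADDM (it assumes a hypothetical orders table with an index on order_date): wrapping an indexed column in a function forces a full scan even though an index exists.

SELECT *
FROM   orders
WHERE  TO_CHAR(order_date, 'YYYY') = '2013';

-- versus the sargable form, which can use the index:
SELECT *
FROM   orders
WHERE  order_date >= DATE '2013-01-01'
AND    order_date <  DATE '2014-01-01';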

Access denied when disabling agent job, despite SqlAgentOperator membership

Posted: 02 Sep 2013 07:58 PM PDT

I am attempting to disable a SQL Agent job from an account which is a member of the SqlAgentOperator role (but not sysadmin).

The doco says this should be possible

SQLAgentOperatorRole members can enable or disable local jobs they do not own by using the stored procedure sp_update_job and specifying values for the @enabled and the @job_id (or @job_name) parameters. If a member of this role specifies any other parameters for this stored procedure, execution of the procedure will fail.

...and indeed the code for msdb.dbo.sp_update_job appears to support this - it has explicit handling for exactly this scenario. Yet if I run:

use [MSDB]
go
select suser_sname()
if(is_member('SqlAgentOperatorRole')<>1)
    raiserror('Not in required role',16,1)
go
exec dbo.sp_update_job @job_name='a job', @enabled=0

... I get the following error:

Msg 229, Level 14, State 5, Procedure sp_update_job, Line 1 The EXECUTE permission was denied on the object 'sp_update_job', database 'msdb', schema 'dbo'.

The error message appears to indicate the proc's not even being run, but even granting that user explicit EXECUTE on that stored proc doesn't seem to fix it.

Does anyone know how to grant a user the ability to disable/enable SQL Agent jobs that they don't own, or has anyone successfully used this functionality? Maybe it's just broken in SQL 2012 SP1 CU4 (which is what I am using).
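
For completeness, the msdb role membership the question relies on is normally granted like this (the login name is a placeholder); the question assumes this part is already in place:

USE msdb;
CREATE USER [DOMAIN\AgentOperator] FOR LOGIN [DOMAIN\AgentOperator];
ALTER ROLE SQLAgentOperatorRole ADD MEMBER [DOMAIN\AgentOperator];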

pg_dump format => custom vs sql

Posted: 02 Sep 2013 03:48 PM PDT

There are two main formats for pg_dump: custom and SQL. The custom format is compressed by default and not readable if you try to open it. But is it faster to dump into this format as opposed to the SQL format, or is the speed the same?

How to make Postgres autovacuum not impact performance?

Posted: 02 Sep 2013 03:49 PM PDT

I am running into problems where Postgres autovacuum processes are interfering with the overall performance of my database. They are making my SELECT, INSERT, and UPDATE queries slower than usual. What are some settings and optimization strategies that will make autovacuum have less of an impact on my database performance?

Is there a way to make it not consume as much memory/CPU so that client queries can run more efficiently?
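
For reference, these are the knobs usually discussed (the values are illustrative, not recommendations): autovacuum I/O can be throttled globally in postgresql.conf, or a single hot table can be tuned so it is vacuumed in smaller, cheaper passes.

-- postgresql.conf (global throttling)
--   autovacuum_vacuum_cost_delay = 20ms
--   autovacuum_vacuum_cost_limit = 200

-- per-table tuning (busy_table is a placeholder name)
ALTER TABLE busy_table SET (autovacuum_vacuum_scale_factor = 0.05);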

Why doesn't running a full sample *always* improve query performance

Posted: 02 Sep 2013 02:42 PM PDT

Conventional wisdom would suggest that running a full sample of statistics would provide SQL with the optimum information to make the best decisions when forming a query plan for execution and therefore the best performance.

However, I have a number of queries (produced by Business Objects) which prefer a sample below 100%; the exact sample percentage required for success varies. What I want to know (but am struggling to find online) is why a 100% sample doesn't always produce the best performance. Index maintenance is regular, every night, following the 'Ola Hallengren' method.
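
For reference, these are the two sampling modes being compared (the table name is a placeholder):

UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;
UPDATE STATISTICS dbo.SomeTable WITH SAMPLE 50 PERCENT;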

The nature of the query makes it difficult to post (it contains sensitive information), but those who are familiar with Business Objects queries will know they can be unwieldy... rather unusual in their construction; I often think it's their setup which hurts the performance.

Thanks!

How to run a SELECT query within while loop in PHP?

Posted: 02 Sep 2013 05:18 PM PDT

Within a PHP while loop over a SELECT query's results, I need to run another MySQL SELECT query, like this:

$result1 = $mysqli->query("SELECT * FROM table1");

while ( $row = $result1->fetch_assoc() ) {

    if ( $row['X'] == 'X' ) {
        $result2 = $mysqli->query("SELECT * FROM table2");
        .....
    }

}

but this does not work. I cannot JOIN the tables, as the if statement is too complicated; it performs SELECTs from different tables.

Oracle 11g http listener configuration for PL/SQL server pages

Posted: 02 Sep 2013 08:53 AM PDT

Could someone please point me to how to configure the Oracle DB to be able to serve PL/SQL Server Pages? I have successfully created and mapped a DAD with DBMS_EPG.

Here is the listener:

# listener.ora Network Configuration File: D:\app\Ja\product\11.2.0\dbhome_1\NETWORK\ADMIN\listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 8080))
    )
  )

ADR_BASE_LISTENER = D:\app\Ja
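
For reference, the DAD side of the setup described above usually looks something like this (the DAD and user names are placeholders; the embedded gateway serves HTTP on the XDB port):

BEGIN
  DBMS_EPG.create_dad(dad_name => 'my_dad', path => '/my_dad/*');
  DBMS_EPG.authorize_dad(dad_name => 'my_dad', user => 'SCOTT');
END;
/

EXEC DBMS_XDB.sethttpport(8080);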

When I want to enter

localhost:8080/my_dad/home

I get error: No data received.

How do I solve this?

MySql one time event never runs?

Posted: 02 Sep 2013 01:18 PM PDT

Please have a look at the events below:

1) create EVENT Test1
   ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 SECOND
   ON COMPLETION PRESERVE ENABLE
   DO ...

2) create EVENT Test2
   ON SCHEDULE EVERY 20 SECOND STARTS CURRENT_TIMESTAMP
   ON COMPLETION PRESERVE ENABLE
   DO ...

I expect event Test1 to run one time after 20 seconds but it never runs. Event Test2 is working fine.

Any idea? Thanks.

OK, sorry, it is the ALTER that is not working.

At first I did: create EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 SECOND ON COMPLETION PRESERVE ENABLE DO ...

Then shortly after I did: alter EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 SECOND ON COMPLETION PRESERVE ENABLE DO ...

I expected event Test1 to run again in another 20 seconds, but it didn't.
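
One generic thing worth checking first (not necessarily the cause here) is that the event scheduler thread is actually running:

SHOW VARIABLES LIKE 'event_scheduler';
SET GLOBAL event_scheduler = ON;
SHOW EVENTS;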

Getting 'specified network password is not correct' when trying to change password

Posted: 02 Sep 2013 09:00 AM PDT

After changing my network password, I went straight to the SQL Server Configuration Manager (SQL Server 2008) to update the passwords for the SQL Server and SQL Agent services. (Background info: the services run under my domain account.)

However, I kept getting the below error after clicking on apply:

The specified network password is not correct. [0x80070056]  

I guarantee the password entered was definitely the correct one because I have restarted my workstation and used it to log on successfully. Most answers on the web tend to focus on the password but I know that's just the symptom not the root cause.

I know it is difficult to answer questions like this one unless you have experienced it first-hand. So I am asking if anyone else has seen this behavior before and what they did to resolve it.

Slow SSRS Report in production

Posted: 02 Sep 2013 12:56 PM PDT

I have an SSRS report which gets its data by firing a series of stored procedures.

Now the report is timing out big time when run in production, yet when I pull down the prod database and restore to development the report runs fine.

I was thinking of setting up a SQL Server Profiler trace in production, and hopefully that will tell me something... e.g. high disk I/O at the time it's being run.

What else should I be doing? Something with perfmon?
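
A lighter-weight first step (standard DMVs, no trace needed) is to look at what the report's sessions are waiting on while it runs; the query below is generic, not specific to SSRS:

SELECT r.session_id, r.status, r.wait_type, r.wait_time, r.cpu_time,
       r.total_elapsed_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;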

How to run a cold backup with Linux/tar without shutting down MySQL slave?

Posted: 02 Sep 2013 04:18 PM PDT

I run the following before tar-ing up the data directory:

STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;

However, tar will sometimes complain that the ibdata* and ib_logfiles* files are updated during the process. What am I missing?

The slave machine is a cold standby, so there are no client processes running while tar is running.

CentOS release 5.6 64bits, MySQL 5.1.49-log source distribution.
