Thursday, July 11, 2013

[how to] What can I do to repair a corrupted partitioned table on the slave?

What can I do to repair a corrupted partitioned table on the slave?

Posted: 11 Jul 2013 07:59 PM PDT

  • mysql Ver 14.14 Distrib 5.5.31, for Linux (x86_64) using readline 5.1

The slave has been stopped due to power outage:

              Master_Log_File: mysql-bin.000491
          Read_Master_Log_Pos: 132257162
               Relay_Log_File: mysqld-relay-bin.001088
                Relay_Log_Pos: 208318295
        Relay_Master_Log_File: mysql-bin.000444
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 1594
                   Last_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 208318149
              Relay_Log_Space: 12525464864
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 1594
               Last_SQL_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 647

This can be fixed by running:

  1. STOP SLAVE
  2. SHOW SLAVE STATUS
  3. Note down 'Relay_Master_Log_File' and 'Exec_Master_Log_Pos' entries.
  4. RESET SLAVE
  5. CHANGE MASTER TO ..... (use MASTER_LOG_FILE=relay_master_log_file and MASTER_LOG_POS=exec_master_log_pos from Step 3)
  6. START SLAVE
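
In SQL, and plugging in the Relay_Master_Log_File and Exec_Master_Log_Pos values from the status output above, those steps look roughly like the sketch below (a sketch, not the exact commands; after RESET SLAVE the connection settings such as host, user and password may also need to be re-specified):

STOP SLAVE;
RESET SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = 'mysql-bin.000444',   -- Relay_Master_Log_File noted in step 3
    MASTER_LOG_POS  = 208318149;            -- Exec_Master_Log_Pos noted in step 3
START SLAVE;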

Then I got another error:

              Master_Log_File: mysql-bin.000727
          Read_Master_Log_Pos: 97824193
               Relay_Log_File: mysqld-relay-bin.000197
                Relay_Log_Pos: 158825847
        Relay_Master_Log_File: mysql-bin.000510
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 1696
                   Last_Error: Error executing row event: 'Failed to read from the .par file'
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 158825701
              Relay_Log_Space: 57756920922
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 1696
               Last_SQL_Error: Error executing row event: 'Failed to read from the .par file'
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 647

I guess that it's because an ALTER TABLE ... DROP PARTITION statement is being replicated, similar to this thread.

Is there any way to repair this table and let the slave SQL thread continue running, instead of starting replication from the beginning?

PostgreSQL : Restrict (another) superuser from accessing database

Posted: 11 Jul 2013 07:41 PM PDT

I'm new to PostgreSQL and am trying to find out whether this is possible.

There are two superusers, 'postgres' (which is default superuser from PostgreSQL) and 'super2'.

Superuser 'super2' creates a new database named 'Movies' and is its owner.

Is it possible to restrict superuser 'postgres' from accessing the database 'Movies', so that only superuser 'super2', as the owner of the database, can access it?

If it is possible, then how to make it happen?
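
For ordinary (non-superuser) roles, the usual approach is database-level privileges, as in the sketch below; note, though, that superuser roles bypass these permission checks entirely, so a REVOKE alone will not keep the postgres superuser out.

REVOKE CONNECT ON DATABASE "Movies" FROM PUBLIC;   -- nobody connects by default
GRANT  CONNECT ON DATABASE "Movies" TO super2;     -- the owner gets access back
-- superusers such as postgres skip privilege checks, so this does not block them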

What is a good way to replicate code across multiple databases?

Posted: 11 Jul 2013 08:24 PM PDT

SQL Server 2005

I have two databases (Source1 and Source2) with the same data model and I plan to merge the data into a third database (Target) with an "abstracted" data model. In the future, it's possible to have additional Source databases that will need to be merged into Target.

Originally, I was hoping to build views and stored procedures residing in the Target database that I could dynamically run against Source1 to load Target, then switch to Source2. That doesn't seem to be possible without dynamic SQL.

So plan B is to build the views and procedures in the Target database then deploy them to Source1 and Source2. What is the best way to replicate code to multiple databases?

I'm open to any new ideas that address this situation.
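
For what it's worth, one pattern that can keep the plan-A code in Target without dynamic SQL is to reference the source tables through synonyms and re-point the synonyms when switching sources. This is only a sketch; src_Orders and Orders are made-up placeholder names.

-- views and procedures in Target reference dbo.src_Orders instead of a hard-coded database name
CREATE SYNONYM dbo.src_Orders FOR Source1.dbo.Orders;
GO
-- when it is time to load from Source2, re-point the synonym and rerun the same code
DROP SYNONYM dbo.src_Orders;
CREATE SYNONYM dbo.src_Orders FOR Source2.dbo.Orders;
GO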

Polling for new records in a table

Posted: 11 Jul 2013 04:06 PM PDT

We have a table that is append+read only. We'd like to have multiple applications consume all new rows that show up in that database, in a cheap and easy way (without change tracking).

The simple suggestion of having each app record the last ID doesn't seem safe. Suppose 3 connections (2 writers and 1 reader) do this:

W1: BEGIN TRAN, INSERT 1000 Records (IDs 1-1000)
W2: BEGIN TRAN, INSERT 1000 Records (IDs 1001-2000)
W2: COMMIT
R1: SELECT * WHERE Id > 0 (last sync value)
W1: COMMIT (or possibly ROLLBACK)

In this case, the reader would get records 1001-2000 and save 2000 as the highest ID. Obviously that's not correct. Assuming the various writers are running at varying speeds (so we can never assume they'll commit in order, due to latency or perf), what options are there?

Must we resort to something like change tracking, or require every application to save the last (batch*writers*x) IDs and re-read them to make sure they didn't miss any?

Rollbacks and other gaps in the sequence (MSSQL can jump identity values by 1000; other databases likely do something similar under certain conditions) make it complicated to guess whether rows were skipped. Also, the batch inserts might not be a single batch; they could consist of multiple INSERT statements, meaning each transaction might not even have a contiguous block of IDs assigned.

We're currently using MSSQL 2012 if there's a hack specific to that system, but I'm more interested in a general answer.
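
For illustration only, one pattern that avoids the race on identity values on recent SQL Server versions (including 2012) is to key the polling off a rowversion column and MIN_ACTIVE_ROWVERSION(), which excludes values assigned by still-open transactions. The table and column names below are placeholders, not part of the question.

ALTER TABLE dbo.Events ADD rv ROWVERSION;            -- monotonically increasing, assigned on insert/update

DECLARE @from binary(8) = 0x0000000000000000;        -- watermark the reader saved after its last poll
DECLARE @to   binary(8) = MIN_ACTIVE_ROWVERSION();   -- nothing still in flight is below this value

SELECT *
FROM dbo.Events
WHERE rv >= @from AND rv < @to                       -- rows from uncommitted writers stay above @to
ORDER BY rv;

-- the reader then persists @to as its new watermark for the next poll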

InnoDB log sequence number is in the future

Posted: 11 Jul 2013 08:31 PM PDT

130711 23:03:40  InnoDB: Error: page 23 log sequence number 3336825872455
InnoDB: is in the future! Current system log sequence number 235089.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files.

I've never seen or had this error before. The client does not have backups so they need a recovery done.

I've done MyISAM recovery before but never InnoDB. Are there any tips on how to do it correctly?

From what I've read, I need to increase a single counter (the "log sequence number"), which is stored somewhere in the headers of ib_logfile0 and ib_logfile1, or something like that; is that correct?

Negative number of rows inserted

Posted: 11 Jul 2013 01:59 PM PDT

I just ran a rather simple insert statement (that I think processed a lot of rows), and I got the message: "-1,311,750,183 rows inserted". Should I be concerned about the fact that it's negative?
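
One plausible (though unconfirmed) explanation is that the client reports the row count as a signed 32-bit integer, so a very large count wraps around into a negative number; under that assumption, the true count works out as below.

-- hypothetical interpretation: 32-bit wraparound of the displayed row count
SELECT 4294967296 - 1311750183 AS possible_true_row_count;   -- = 2,983,217,113 rows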

Is it possible to keep track of changes to table structure?

Posted: 11 Jul 2013 12:50 PM PDT

Is it somehow possible to keep track of changes made to the STRUCTURE (not the data) of tables?

I mean, I don't want to know when rows are inserted/updated/deleted; what I want is to know when the table definition has been changed (and what those changes were), e.g. to know if/when a column was added or removed, a column type was changed, an index was added, etc.

In other words, to keep track of all the "ALTERs" on the tables (and the state before those alters).

I'd like this so I know exactly when certain changes were made, and to keep track of discrepancies in the data, for example to know when a column was changed to hold VARCHARs instead of INTs (just an idea), or when a new column was added, etc.

So is there a way to do this?

I am using MySQL, version 5.1.66. And if it is not possible in MySQL, is it possible in other databases?

Thanks
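
For illustration, since MySQL 5.1 has no DDL triggers, one low-tech approach is to snapshot the table definitions from information_schema on a schedule and diff consecutive snapshots. This is only a sketch, and 'mydb' is a placeholder schema name.

-- first run: create the history table from an initial snapshot
CREATE TABLE schema_history AS
SELECT NOW() AS captured_at,
       table_name, column_name, ordinal_position,
       column_type, is_nullable, column_key
FROM information_schema.columns
WHERE table_schema = 'mydb';

-- later runs (e.g. from cron or the event scheduler): append a new snapshot,
-- then compare the two most recent captured_at batches to see what changed
INSERT INTO schema_history
SELECT NOW(), table_name, column_name, ordinal_position,
       column_type, is_nullable, column_key
FROM information_schema.columns
WHERE table_schema = 'mydb';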

Locking and Faulting on import

Posted: 11 Jul 2013 01:20 PM PDT

My collection has an _id index and one additional single-field index. It runs on a single node, on localhost, with 32 GB RAM and a 16-core CPU available. The HDD is in RAID1, without an SSD cache. Mongo version is 2.4.

I run mongoimport on a dump of 5 million documents totalling about 5 GB. It starts fast and slows down very quickly.

The import starts at about 11,000 documents/second. After 3 minutes it is down to 9,000/s, a minute later to 7,000/s, and after some more minutes we are in slow motion at 200/s. The mongoimport takes hours instead of minutes.

mongostat shows increasing lock percentages and faults:

insert  query update delete getmore command flushes mapped  vsize    res faults  locked db idx miss %     qr|qw   ar|aw  netIn netOut  conn       time
    *0     *0     *0     *0       0     1|0       0   912g  1825g    21g      1 tdb:74.0%          0       0|0     0|1     5m     3k    10   03:20:21
...
    79     *0     *0     *0       0     1|0       0   912g  1825g    21g     10 tdb:33.6%          0       0|0     0|0    47k     3k    10   03:22:22
...
   104     *0     *0     *0       0     1|0       0   912g  1825g    21g     13  tdb:7.9%          0       0|1     0|1    60k     3k    10   03:22:33
...
    86     *0     *0     *0       0     1|0       0   912g  1825g    21g     11  tdb:0.8%          0       0|0     0|1    52k     3k    10   03:23:17
...
   128     *0     *0     *0       0     1|0       0   912g  1825g    21g      9 tdb:58.3%          0       0|0     0|1    76k     3k    10   03:25:00
...
    95     *0     *0     *0       0     1|0       0   912g  1825g    21g     11 tdb:39.3%          0       0|0     0|0    58k     3k    10   03:26:07
...
    86     *0     *0     *0       0     1|0       0   912g  1825g    21g     12 tdb:88.8%          0       0|0     0|1    51k     3k    10   03:27:12

I run mongoimport without --upsert and also tried with only the _id index. Furthermore, reIndex() did not improve things.

How can I improve this and keep the bulk insert speed constant?

What makes this rare error: "Can't connect to local MySQL server through socket '/tmp/mysql.sock' (146)"?

Posted: 11 Jul 2013 02:16 PM PDT

This is a rare error that occurs on Solaris; I have seen it from Perl, looking like this:

DBI->connect(...) or die "Unable to connect: $DBI::errstr"

Outputs

Unable to connect: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (146) at something.pl line ...

It is very rare for this error to occur.

The global variable max_connections has been ruled out as a problem.

Can you enumerate the possible or likely causes of this error happening only occasionally?

MySQL stored procedure create database by month

Posted: 11 Jul 2013 12:29 PM PDT

I'm a systems admin, not a DBA, and I'm wondering whether I could use a MySQL stored procedure to query the main database and move records, by month, to another database located on the same system.

This way we can keep the main database small, and have the data moved to an individual database for each month.

Thank you
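
Stored procedures can do this kind of cross-database move as long as both databases live on the same server. Below is only a rough sketch with made-up database, table and column names, and it archives into a single archive database rather than one database per month.

DELIMITER //
CREATE PROCEDURE archive_old_orders()
BEGIN
    -- copy everything older than the first of the current month, then delete it
    INSERT INTO archive_db.orders
        SELECT * FROM main_db.orders
        WHERE created_at < DATE_FORMAT(NOW(), '%Y-%m-01');

    DELETE FROM main_db.orders
        WHERE created_at < DATE_FORMAT(NOW(), '%Y-%m-01');
END //
DELIMITER ;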

optimizing MySQL for traffic analytics system

Posted: 11 Jul 2013 12:18 PM PDT

Background:
I've developed a URL shortener system like Bitly with the same features, so the system also tracks clicker info and presents it as graphs (analytics data) to the person who shortened the link. Currently I'm using MySQL and have a table to store click info with this schema:

visit_id (int)
ip (int)
date (datetime)
country
browser
device
os
referrer (varchar)
url_id (int)   //as foreign key to the shortened URL

For now, only the url_id field has an index.

The system should present click analytics for whatever time period the user wants, for example the past hour, the past 24 hours, the past month, and so on.

For example, to generate graphs for the past month, I run the following queries:

SELECT all DAY(date) AS period, COUNT( * )
FROM (
      SELECT *
      FROM visits
      WHERE url_id = '$url_id'
     ) AS URL
WHERE DATE > DATE_SUB( CURRENT_TIMESTAMP( ) , INTERVAL 1 MONTH )
GROUP BY DAY( DATE )

//another query to display clicker browsers in this period
//another query to display clicker countries in this period
// ...

Issues:

  • For a shortened link with about 500,000 clicks, just the first query takes about 3-4 seconds, so all the queries together take about 10-12 seconds, which is terrible.
  • Lots of memory and CPU are needed to run such queries.

Questions:
1- How can I improve and optimize the structure so that the analytics for high-traffic links are shown in less than 1 second (like Bitly and similar web apps) and with less CPU and RAM usage? Should I add an index on the fields date, country, browser, device, os and referrer? If yes, how do I handle the date field, given that I sometimes group clicks by DAY(date), sometimes by HOUR(date), sometimes by MINUTE(date), and so on? (A sketch follows below.)

2- Is MySQL suitable for this application? Assume that, at maximum, my application should handle 100 million links and 10 billion clicks on them in total. Should I consider switching to a NoSQL solution, for example?

3- If MySQL is OK, are my database design and table structure proper and well designed for my application's needs? Or do you have better recommendations and suggestions?

Thanks for your help
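
As a starting point, a composite index that leads with url_id and then date lets MySQL satisfy both the filter and the date range from the index, and the derived-table wrapper in the query above is not needed. This is a sketch based on the schema shown, not a tested tuning recommendation.

ALTER TABLE visits ADD INDEX idx_url_date (url_id, `date`);

SELECT DAY(`date`) AS period, COUNT(*) AS clicks
FROM visits
WHERE url_id = '$url_id'
  AND `date` > DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 1 MONTH)
GROUP BY DAY(`date`);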

How do I migrate varbinary data to Netezza?

Posted: 11 Jul 2013 12:48 PM PDT

I got a warning message while migrating DDL from SQL Server to Netezza:

Warning: [dbo].[spec_binarymessage].[blobdata] data type [varbinary] is not supported the target system and will be scripted as VARCHAR(16000).

I'm wondering whether this kind of data conversion will cause issues such as data truncation.

SQL Server not using available memory

Posted: 11 Jul 2013 10:35 AM PDT

Microsoft SQL Server 2008 R2 Standard Edition (64-bit) on Windows 2008 R2 Enterprise.

  • The server has over 300 GB of memory, but total memory use in Control Panel never goes over 86 GB.
  • SQL Server is configured to use the maximum amount of memory.
  • This holds even under heavy use (CPU over 80% for minutes at a time).
  • It is a dedicated SQL Server with several large databases.
  • One heavily used table has an index size alone of over 10 GB.
  • The service account is set to hold locks in memory.

Is that normal?
What can I test?
Can I get SQL to use more memory?
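
A couple of quick checks (a sketch for SQL Server 2008 R2) can confirm what the configured cap really is and how much memory the sqlservr process itself is holding, since Task Manager / Control Panel figures can be misleading when locked pages are in use:

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'max server memory (MB)';

SELECT physical_memory_in_use_kb / 1024 AS sql_server_memory_in_use_mb,
       locked_page_allocations_kb / 1024 AS locked_pages_mb
FROM sys.dm_os_process_memory;

Also worth noting: SQL Server 2008 R2 Standard Edition caps the database engine at 64 GB of memory, which by itself would explain never seeing the full 300 GB used.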

Create Table If Not Exists Hanging

Posted: 11 Jul 2013 12:58 PM PDT

I am still new to databases. Currently I am using a MySQL database and attempting to make my bash script unbreakable, because it might be used in several different environments (dev, qa, etc.). I know it will run correctly without this one line:

CREATE TABLE IF NOT EXISTS backupfiles (fileName VARCHAR(20), archiveId VARCHAR(500), checkSum VARCHAR(100), glacierVault VARCHAR(100), timeStamp date);  

I also know that if I run that exact line against my database when the table does not exist, it runs perfectly, and if the table already exists it gives me a warning but does not break.

+-------+------+------------------------------------+
| Level | Code | Message                            |
+-------+------+------------------------------------+
| Note  | 1050 | Table 'backupfiles' already exists |
+-------+------+------------------------------------+

When I run the line above and there is no table named backupfiles, it works perfectly, but when there is, it hangs (when I say hangs, it sits there and does nothing) 95% of the time and works 5% of the time. Has anyone run into a problem like this? I am using AWS RDS (Amazon Web Services Relational Database Service) and the MySQL server is 5.5.27.

Here is all of my code that relates to the MySQL database:

mysql -h portal-rds -u $user --password=$mysqlpw <<QUERY_INPUT
CREATE DATABASE IF NOT EXISTS $DATABASE;
use $DATABASE;
CREATE TABLE IF NOT EXISTS backupfiles (fileName VARCHAR(20), archiveId VARCHAR(500), checkSum VARCHAR(100), glacierVault VARCHAR(100), timeStamp date);
INSERT INTO backupfiles VALUES ('$archive_file_name', '$archiveID', '$CURRENTVAULT', '$checkSum', CURDATE());
COMMIT;
QUERY_INPUT
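
For illustration, a CREATE TABLE IF NOT EXISTS that hangs on an existing table is often waiting for a metadata lock held by another session's open transaction. On MySQL 5.5, a second connection can check for that roughly like this:

SHOW FULL PROCESSLIST;                   -- look for "Waiting for table metadata lock"
SELECT trx_id, trx_started, trx_mysql_thread_id
FROM information_schema.innodb_trx;      -- long-running transactions that may hold the lock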

SQLCMD Exporting 16.5 Million Rows

Posted: 11 Jul 2013 11:30 AM PDT

I am exporting a table to a TSV file using sqlcmd, and I am running into issues. The table has 16+ million rows and about 55 columns.

The problem is that it does not export the full table, but seems to stop randomly at various points (I am guessing a timeout?). Each time a different number of rows is exported, and each time the file is a slightly different size (indicating that I am not hitting any row or size limit).

I am not using any timeout switch (meaning the default of "as long as it takes" is used).

Here is my command (with most columns removed for simplification and illustration purposes):

sqlcmd -U myUsername -P myPassword -S SERVERNAME -d "DBNAME" -Q "SELECT ROW_KEY, ISNULL(CARRIER_ID, ''),ISNULL(CONVERT(VARCHAR(19),ORDER_DATETIME,120),''),ISNULL(HEIGHT, ''),ISNULL(RESIDENTIAL_CHARGE, '') FROM MYTABLE" -o "OUTPUT_FILE.txt" -h-1 -s"|" -W  

I wonder if it could have something to do with timeouts or the use of ISNULL() on all of the columns (although when I run the query in SQL Server Management Studio I get the correct number of rows returned, i.e. 16 million+)?

Again, I get about 4-8 million rows each time, but never the full amount. I am exporting from a SQL Server 2005 database and running sqlcmd from a remote machine with SQL Server 2008.

Order table Database design

Posted: 11 Jul 2013 08:05 PM PDT

I need some help with the database design of the order tables of a shopping cart for an online food ordering system. I have a design below. The tables I'm concerned about are order_details and order. Basically, the user selects foods from restaurant menus and places an order; user details are stored in the user table (which is not shown in the image).

This is my first attempt, and I was never good at databases, so please take it easy if you feel the question is too basic for this site.

And my questions are:

  • Are the tables good enough?
  • Do I need to add any more fields?
  • And one more question: I need to store multiple addresses for users (a user can have multiple shipping addresses). How can I handle this? (See the sketch below.)

    http://i.stack.imgur.com/HCYBi.jpg
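
For the multiple-address question, the usual pattern is a separate address table keyed to the user, with each order pointing at the address that was used. This is only a sketch with made-up column names (MySQL-style syntax assumed).

CREATE TABLE user_address (
    address_id   INT AUTO_INCREMENT PRIMARY KEY,
    user_id      INT NOT NULL,
    line1        VARCHAR(100) NOT NULL,
    city         VARCHAR(50)  NOT NULL,
    postal_code  VARCHAR(20),
    FOREIGN KEY (user_id) REFERENCES user (user_id)
);
-- the order table then carries a shipping_address_id column
-- referencing user_address (address_id)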

Unable to generate a MySQL query

Posted: 11 Jul 2013 12:21 PM PDT

I have a table in a MySQL database as below:

ID  Timestamp            State
1   2013-07-10 09:00:00  Good
1   2013-07-10 09:10:00  Good
1   2013-07-10 09:20:00  Good
1   2013-07-10 09:30:00  Bad
1   2013-07-10 09:40:00  Bad
1   2013-07-10 09:50:00  Bad
1   2013-07-10 10:00:00  Good
1   2013-07-10 10:10:00  Good

I want to generate a report as shown below using the above table.

ID  Start Timestamp      End Timestamp        Duration (in mins)  State
1   2013-07-10 09:00:00  2013-07-10 09:30:00  30                  Good
1   2013-07-10 09:30:00  2013-07-10 10:00:00  30                  Bad
1   2013-07-10 10:00:00  2013-07-10 10:10:00  10                  Good

The query I came up with is not able to capture the switching between states.
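
This is the classic "gaps and islands" problem. In MySQL 5.x, one hedged way to tackle it is with session variables that bump a group counter whenever the state changes, as in the sketch below (mytable is a placeholder name, and the report's End Timestamp would come from the start of the following run):

SELECT id, state,
       MIN(`timestamp`) AS start_timestamp,
       MAX(`timestamp`) AS last_sample_in_run
FROM (
    SELECT t.id, t.`timestamp`, t.state,
           @grp  := IF(@prev = t.state, @grp, @grp + 1) AS grp,
           @prev := t.state
    FROM mytable t
    CROSS JOIN (SELECT @grp := 0, @prev := NULL) vars
    ORDER BY t.id, t.`timestamp`
) runs
GROUP BY id, grp, state
ORDER BY id, start_timestamp;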

SqlPackage does not pick up variables from profile

Posted: 11 Jul 2013 05:08 PM PDT

I want to upgrade a database using .dacpac and sqlpackage.exe

Here is how I run SqlPackage:

SqlPackage.exe
    /Action:Publish
    /SourceFile:"my.dacpac"
    /Profile:"myprofile.publish.xml"

The error I get is:

* The following SqlCmd variables are not defined in the target scripts: foo.

I have verified that the myprofile.publish.xml file does contain that variable:

<ItemGroup>
  <SqlCmdVariable Include="foo">
    <Value>bc\local</Value>
  </SqlCmdVariable>

I also verified that the project that creates the dacpac publishes successfully from within Visual Studio using myprofile.publish.xml.

What else could I be missing?

(I'm using SQL Server 2012)

Database Mail sending functionality not working on local system

Posted: 11 Jul 2013 12:54 PM PDT

I am using the Database Mail functionality to send mail from a SQL Server 2008 database via the following stored procedure execution:

EXEC sp_send_dbmail @profile_name = 'MyProfile',
                    @recipients = 'abc@companyname.com',
                    @subject = 'Test message',
                    @body = 'Congrats Database Mail Received By you Successfully.'

I have tried it with my Gmail account profile on my local system and it works properly, but not with my company or Outlook profile.

Error message:

The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 5 . Exception Message: Could not connect to mail server. (A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond abc.j.i.ooo:pp). )

Reference

http://blogs.msdn.com/b/suhde/archive/2009/07/12/how-to-configure-sql-server-database-mail-to-send-email-using-your-windows-live-mail-account-or-your-gmail-account.aspx  

What could be the problem?

Thanks
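
For illustration, the detailed SMTP failure reason is usually recorded in msdb's Database Mail log, which can narrow down whether it is a DNS, port, or authentication problem with the company/Outlook SMTP server:

SELECT TOP (20) log_date, description
FROM msdb.dbo.sysmail_event_log
ORDER BY log_date DESC;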

Is there a standard way to use common data and procedures across multiple databases?

Posted: 11 Jul 2013 01:52 PM PDT

We have several independent databases that have data and code in common, not in the sense that it is accessed between the databases but in the sense that the data means the same thing in each database, and the code does the same thing.

Examples are:

  • Configuration settings (e.g. error codes 50xxx, boilerplate text for company name, etc).
  • Procedures and functions that perform common tasks (e.g. converting a CSV string into a table, logging an error, formatting an error message based on the error code).
  • Table structures (e.g. table for database version history, table for logging errors). As well as the columns, constraints, and triggers there are also common procedures and functions that read/write the data.
  • Look-up tables (e.g. date look-up table containing dates between 2000-2100). These are similar to table structures but quite often the data will be the same across the databases. In the case of the date table, the start and end dates are configuration settings, which are read by a function, then the date data is generated by a procedure. All these operations are common between the databases.
  • User defined types (e.g. types for passing tables to functions).

For maintenance and support reasons I think it makes sense for things like error codes, procedures, functions, and types to have a "single point of truth" across all the databases, not a different truth in each database.

At the moment each database has its own copy of everything, including its own source repository, and we maintain them all independently. This is far from ideal, because it's too easy to fix a procedure in A and forget to put it in B, or add an error code to A and the same one to B but with different meanings, etc.

The databases are not updated at the same time and they don't necessarily reside on the same hardware. There are cases where they can read data from each other (if the other one exists).

Is there a standard way of having a single point of truth for data/code that is used across more than one database?

createdb: could not connect to database postgres: FATAL: could not write init file

Posted: 11 Jul 2013 07:54 PM PDT

RedHat Enterprise Server 3.0 32 Bits

psql (PostgreSQL) 8.2.3

user: postgres

server is running:

/soft/postgres/8.2.3/bin/pg_ctl start
pg_ctl: another server may be running; trying to start server anyway
2013-05-09 11:23:07 BRST---:FATAL:  lock file "postmaster.pid" already exists
2013-05-09 11:23:07 BRST---:HINT:  Is another postmaster (PID 12810) running in data directory "/opt/psql/dba/bdadms/data1/pstg"?
pg_ctl: could not start server
Examine the log output.

I had just created a new database cluster with initdb; but when I run createdb:

8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb pstg
createdb: could not connect to database postgres: FATAL:  could not write init file
8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb postgres
createdb: could not connect to database template1: FATAL:  could not write init file
8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb template1
createdb: could not connect to database postgres: FATAL:  could not write init file

Any clues as to the cause and possible solutions to this problem?

How to load the Adventureworks database into Oracle?

Posted: 11 Jul 2013 01:09 PM PDT

I am trying to work through some of the examples at SQLZoo, and I would like to load the AdventureWorks example database into an Oracle database at home. A Google search has not turned up any SQL scripts that have been converted to Oracle. Does anyone have such scripts, or know where I can get them?

Repeated values in group_concat

Posted: 11 Jul 2013 03:54 PM PDT

I have two tables; the first is food and the second is Activity:

INSERT INTO food
    (`id`, `foodName`)
VALUES
    (1, 'food1'),
    (2, 'food2'),
    (3, 'food3'),
    (4, 'food4'),
    (5, 'food5'),
    (6, 'food6');

CREATE TABLE Activity
    (`id` int, `place` varchar(14), `food_id` int, `timing` TIME, `date_and_time` DATETIME);

INSERT INTO Activity
    (`id`, `place`, `food_id`, `timing`, `date_and_time`)
VALUES
    (1, 'place1', 1, '10:30am', '2013-05-01'),
    (2, 'place1', 1, '12:30pm', '2013-05-01'),
    (3, 'place1', 1, '04:30pm', '2013-05-01'),
    (4, 'place2', 2, '10:30am', '2013-05-02'),
    (5, 'place2', 2, '12:30pm', '2013-05-02'),
    (6, 'place2', 2, '4:30pm', '2013-05-02'),
    (7, 'place1', 2, '10:30am', '2013-05-02'),
    (8, 'place1', 2, '12:30pm', '2013-05-02'),
    (9, 'place1', 2, '4:30pm', '2013-05-02'),
    (10, 'place2', 3, '10:30am', '2013-05-03'),
    (11, 'place2', 3, '12:30pm', '2013-05-03'),
    (12, 'place2', 3, '4:30pm', '2013-05-03');

For now I'm using the following query:

SELECT
    a.activity_type AS Activity,
    COUNT(DISTINCT p.id) AS Products,
    CONVERT(GROUP_CONCAT(p.category_id SEPARATOR ',  ') USING utf8) AS Categories
FROM
    food AS p
  JOIN
    ( SELECT activity_type
           , prod_id
      FROM activity
      WHERE activity_type <> ''
      GROUP BY activity_type
             , prod_id
    ) AS a
    ON p.id = a.prod_id
GROUP BY
    activity_type
ORDER BY
    Products DESC ;

Could you please help me? I need output in the format below:

place  | food_id | Timings                    | Date
       |         | Time1   | Time2   | Time3  |
-------+---------+---------+---------+--------+---------------------------
place1 | 1       | 10:30am | 12:30pm | 4:30pm | 2013-05-01 (MAX timestamp)
place2 | 1       | 10:30am | 12:30am | 4:30am | 2013-05-01 (MAX timestamp)
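
If collapsing the three timings into a single column per place/food pair is acceptable, a hedged sketch with GROUP_CONCAT over the Activity table shown above would be:

SELECT place,
       food_id,
       GROUP_CONCAT(timing ORDER BY date_and_time, timing SEPARATOR ' | ') AS timings,
       MAX(date_and_time) AS max_date
FROM Activity
GROUP BY place, food_id;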

MySQL PDO Cannot assign requested address

Posted: 11 Jul 2013 01:54 PM PDT

Can someone help me with this error?

[08-Apr-2013 17:44:08 Europe/Berlin] PHP Warning:  PDO::__construct(): [2002]
    Cannot assign requested address (trying to connect via tcp://****:3306) in
    /var/www/***
[08-Apr-2013 17:44:08 Europe/Berlin] PHP Fatal error:  Uncaught exception
    'PDOException' with message 'SQLSTATE[HY000] [2002] Cannot assign requested
    address' in /var/www/***

I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error.

I've tried this recommendation from Stack Overflow, however it does not solve my problem.

Performing SELECT on EACH ROW in CTE or Nested QUERY?

Posted: 11 Jul 2013 06:54 PM PDT

This is a problem in PostgreSQL

I have a table which stores the tree of users:

+------+---------+
|  id  | parent  |
+------+---------+
|  1   |   0     |
|  2   |   1     |
|  3   |   1     |
|  4   |   2     |
|  5   |   2     |
|  6   |   4     |
|  7   |   6     |
|  8   |   6     |
+------+---------+

I can query a complete tree from any node by using the connectby function, and I can separately query the size of a tree in terms of the total nodes in it, for example:

tree for #1 has size 7
tree for #5 has size 0
tree for #6 has size 2, and so on

Now I want to do something like selecting all possible trees from this table (which is again done with connectby), count the size of each, and create another dataset with records of ID and size of the underlying tree, like this:

+------------------+-------------+
|  tree_root_node  |  tree_size  |
+------------------+-------------+
|      1           |     7       |
|      2           |     3       |
|      3           |     0       |
|      4           |     3       |
|      5           |     0       |
|      6           |     2       |
|      7           |     0       |
|      8           |     0       |
+------------------+-------------+

The problem is that I am unable to run the same SELECT statement for every row in the original table in order to fetch the tree and calculate its size, and even if I could, I don't know how to create a separate dataset using the fetched and calculated data.

I am not sure whether this can be done with functions already available in Postgres, or whether I'd have to write a function for it; I simply don't know what this kind of query is called. Googling for hours, and searching for another hour here on dba.stackexchange, returned nothing.

Can someone please point me in the right direction?
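
For what it's worth, if the server is on PostgreSQL 8.4 or later, a recursive CTE can compute every subtree size in one pass without calling connectby per row. The sketch below assumes the table is called users_tree(id, parent), and it reproduces the sizes quoted above (7 for #1, 0 for #5, 2 for #6).

WITH RECURSIVE sub AS (
    SELECT id AS root, id
    FROM users_tree
  UNION ALL
    SELECT s.root, c.id
    FROM sub s
    JOIN users_tree c ON c.parent = s.id
)
SELECT root         AS tree_root_node,
       COUNT(*) - 1 AS tree_size      -- subtract 1 so the root itself is not counted
FROM sub
GROUP BY root
ORDER BY root;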

"Could not find stored procedure' even though the stored procedure have been created in MS SQL Server Management Studio

Posted: 11 Jul 2013 08:21 PM PDT

I have created a table testtable inside the database testbase that has the following structure:

product_no (int, not null)
product_name (varchar(30), not null)
price (money, null)
expire_date (date, null)
expire_time (time(7), null)

for which I used Microsoft SQL Server 2008 Management Studio.

I created a stored procedure testtable_pricesmaller as follows:

use testbase
go
create procedure testtable_pricesmaller
    @pricelimit money
as
select * from testtable where price = @pricelimit;
go

and I am able to view the stored procedure in the Object Explorer of Microsoft SQL Server Management Studio. (It is listed in the following tree structure of the Object Explorer.)

Databases
    + testbase
        + Tables
            + dbo.testtable
        + Programmability
            + Stored Procedures
                + dbo.testtable_pricesmaller

I find it very strange when I receive the following error:

Could not find the stored procedure 'dbo.testtable_pricesmaller'.  

when I execute the following SQL statement:

execute dbo.testtable_pricesmaller 50  

What could be missing?
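
A quick sanity check (a sketch) is to confirm which database the session is actually pointed at and where the procedure really lives, since executing from a connection whose default database is not testbase produces exactly this error:

SELECT DB_NAME() AS current_database;

SELECT name, SCHEMA_NAME(schema_id) AS schema_name
FROM testbase.sys.procedures
WHERE name = 'testtable_pricesmaller';

USE testbase;
GO
EXECUTE dbo.testtable_pricesmaller 50;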

Stored Procedures Best Practices: Fenced or Unfenced?

Posted: 11 Jul 2013 11:58 AM PDT

I believe I understand the reasons behind fenced and unfenced stored procedures.

Fenced procedures run "outside" of the database engine (in our case DB2), so as to prevent possible corruption of the engine should there be issues with things like pointers.

Unfenced procedures run "inside" the database engine, which means performance is better.

From what I have also researched, SQL PL is always basically unfenced, because it is SQL and therefore cannot access memory like programming languages can.

C/C++ and Java procedures can run fenced or unfenced. But since they can potentially access memory, there is a case for running them fenced, unless the code is known to be solid enough not to crash and the extra performance is needed.

First of all, am I correct in my understanding of the above?

Next, is it generally a best practice to start out with all stored procedures (even those defined as SQL PL) as fenced first?

Any other best practices for stored procedures, especially as related to fencing and/or security?

EDIT: Further research has shown that SQL PL procedures cannot run fenced. Since they do not contain any code that could harm the database engine, such as pointers or file I/O, DB2 knows they are safe and runs them inside the engine (i.e., unfenced). That being said, I am still looking for best practices regarding all other stored procedures.
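
For reference, the fenced/unfenced choice is made per external routine in the CREATE PROCEDURE statement. The sketch below uses made-up routine and library names and is only meant to show where the clause goes.

CREATE PROCEDURE log_cleanup (IN days INTEGER)
    LANGUAGE C
    PARAMETER STYLE SQL
    FENCED                                 -- runs in the separate fenced-mode process (db2fmp)
    EXTERNAL NAME 'myutillib!log_cleanup';

-- the same definition with NOT FENCED would run inside the engine address space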
