Thursday, July 18, 2013

[how to] Query (duration) difference between data value changes


Query (duration) difference between data value changes

Posted: 18 Jul 2013 06:41 PM PDT

I am trying to write a query that would get me the duration of a status in a table. This query needs to work in SQL 2008.

Say I have the following table:

    Key  Value  RecordDate
    1    1      2012-01-01
    2    1      2012-01-02
    3    1      2012-01-03
    4    5      2012-01-05
    5    5      2012-01-05 12:00:00
    6    12     2012-01-06
    7    1      2012-01-07
    8    1      2012-01-08

I would like to get the following result

    Value  StartDate   EndDate     Duration
    1      2012-01-01  2012-01-05  4 days
    5      2012-01-05  2012-01-06  1 days
    12     2012-01-06  2012-01-07  1 days
    1      2012-01-07  NULL        NULL

Basically, I would like to get the duration of each value before it changes.

I am getting somewhere close, but still can't figure it out:

    SELECT [Key], [Value],
           MIN(RecordDate) OVER(PARTITION BY [Value]) as 'StartDate',
           MAX(RecordDate) OVER(PARTITION BY [Value]) as 'EndDate',
           DATEDIFF(day, (MIN(RecordDate) OVER(PARTITION BY [Value])), (MAX(RecordDate) OVER(PARTITION BY [Value])))
    FROM [RateTable]
    ORDER BY RecordDate

I know that SQL Server 2012 has the LAG and LEAD functions, but since I am dealing with SQL Server 2008, I can't use them.

Please advise.

Here is the SQL statement that generates the sample data:

    CREATE TABLE RateTable(
        [Key] [int] IDENTITY(1,1) NOT NULL,
        [Value] [int] NULL,
        [RecordDate] [Date] NULL
    )
    GO

    INSERT INTO [RateTable] VALUES (1, '2012-01-01');
    INSERT INTO [RateTable] VALUES (1, '2012-01-02');
    INSERT INTO [RateTable] VALUES (1, '2012-01-03');
    INSERT INTO [RateTable] VALUES (5, '2012-01-04');
    INSERT INTO [RateTable] VALUES (5, '2012-01-05 12:00:00');
    INSERT INTO [RateTable] VALUES (12, '2012-01-06');
    INSERT INTO [RateTable] VALUES (1, '2012-01-07');
    INSERT INTO [RateTable] VALUES (1, '2012-01-08');
    GO
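On SQL Server 2008 this kind of problem is usually solved with the "gaps and islands" pattern: the difference between two ROW_NUMBER() sequences identifies each run of consecutive equal values, and each run's end date is the next run's start date. A minimal sketch against the RateTable above (the CTE names and the grp alias are illustrative):

    -- Sketch: group consecutive rows with the same Value into "islands",
    -- then look up each island's end date as the next island's start date.
    ;WITH numbered AS (
        SELECT [Value], RecordDate,
               ROW_NUMBER() OVER (ORDER BY RecordDate)
             - ROW_NUMBER() OVER (PARTITION BY [Value] ORDER BY RecordDate) AS grp
        FROM RateTable
    ), islands AS (
        SELECT [Value], MIN(RecordDate) AS StartDate
        FROM numbered
        GROUP BY [Value], grp
    )
    SELECT i.[Value], i.StartDate,
           (SELECT MIN(i2.StartDate) FROM islands i2
            WHERE i2.StartDate > i.StartDate) AS EndDate,
           DATEDIFF(day, i.StartDate,
                    (SELECT MIN(i2.StartDate) FROM islands i2
                     WHERE i2.StartDate > i.StartDate)) AS Duration
    FROM islands i
    ORDER BY i.StartDate;

The last row's EndDate and Duration come out NULL, matching the desired output.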

Could not allocate a new page for database because of insufficient disk space in filegroup PRIMARY

Posted: 18 Jul 2013 08:17 PM PDT

*Error 0xc0202009: Data Flow Task 1: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. "Could not allocate a new page for database 'database' because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup." (SQL Server Import and Export Wizard)*

I'm trying to upload a series of 2 GB files, and while loading the 4th file I keep getting the above error message, even after going to Database properties > Files and changing the filegroup's autogrowth to 2500 MB and maxsize to unlimited.

What is the best workaround? This data will ultimately take up around 60-80GB.
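If the drive hosting the PRIMARY filegroup is actually full, autogrowth settings cannot help; one common workaround is adding a data file on another volume with free space. A hedged sketch, with hypothetical logical name, path, and sizes:

    -- Sketch: add a second data file to the PRIMARY filegroup on a volume
    -- that still has room (name, path, and sizes are illustrative).
    ALTER DATABASE [database]
    ADD FILE (
        NAME = 'database_data2',
        FILENAME = 'E:\SQLData\database_data2.ndf',
        SIZE = 10GB,
        FILEGROWTH = 2500MB
    ) TO FILEGROUP [PRIMARY];

Given the expected 60-80GB of data, pre-sizing the file close to that figure would also avoid repeated growth events during the load.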

What is the uniqueness scope of object_id?

Posted: 18 Jul 2013 07:48 PM PDT

What is the uniqueness scope of object_id in SQL Server? I mean, is this value unique per database, per instance, per server...? Also, what happens when I restore the database on another server or instance? And what happens to the object_id if I create the database from a script instead of a backup?
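One way to experiment with this is the OBJECT_ID() metadata function, which resolves a name within the current database; a minimal sketch (the table name is illustrative):

    -- Sketch: run in two different databases, or before and after a restore;
    -- the same object name will generally map to different ids.
    SELECT DB_NAME() AS database_name,
           OBJECT_ID('dbo.SomeTable') AS object_id;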

Should I refresh query plan cache

Posted: 18 Jul 2013 02:27 PM PDT

Please let me explain my problem and situation:

I have a web application: MVC3, SQL Server 2005, LINQ to SQL. It had been running great until one fine morning I pushed a lot of rows to a heavily used table, and since then I have been getting query timeouts.

To fix the problem I ran the Database Tuning Advisor and added some indexes and statistics. I also created a maintenance plan to rebuild indexes daily. After those additions the application behaved unstably; it would work fast for a couple of hours, then start timing out again. Later I was forced to clean up the table, and the number of rows in it is now even smaller than before, but the timeouts are still happening. So I removed all the indexes I had created, and now the website is much more stable, but from time to time I still see some timeouts.

I've been trying to figure out how to fix those queries. When I profile one and paste the query directly into SQL Server Management Studio, it returns the results in 1 second, but when I run the same query from my application it takes about 25 seconds. Then, after it has run once from the application, subsequent runs are as fast as on the server!

I started doing some research, and it looks like when I played with all those indexes, my query plans got messed up and now they are causing issues.

My questions are :

  1. Should I refresh my query plan cache (around 22,000 plans, a lot of which have been used only once)? See the sketch below. And
  2. If I do, what would the impact on SQL Server be while all the plans are rebuilding?
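If flushing the cache turns out to be the right call, SQL Server exposes it directly; a cautious sketch (every plan recompiles on next use, so expect a short CPU spike while the cache warms back up):

    -- Sketch: clear the whole plan cache; all queries compile fresh plans
    -- on their next execution.
    DBCC FREEPROCCACHE;

    -- Stale statistics are a common cause of "fast in SSMS, slow from the
    -- app" behavior, so refreshing them first may be the gentler fix.
    EXEC sp_updatestats;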

Listing databases that are not in use

Posted: 18 Jul 2013 04:19 PM PDT

I want to show databases that are not used. For that I tried this query, but the result set is empty:

    mysql> SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA
           WHERE SCHEMA_NAME NOT IN (SELECT db FROM INFORMATION_SCHEMA.PROCESSLIST);
    Empty set (0.70 sec)

I use MySQL 5.5.24.

Any suggestions?
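One thing worth checking: connections with no default database have db = NULL in PROCESSLIST, and NOT IN against a set that contains NULL matches nothing, which would explain the empty result. A sketch that filters the NULLs out (assuming that is the cause here):

    SELECT SCHEMA_NAME
    FROM INFORMATION_SCHEMA.SCHEMATA
    WHERE SCHEMA_NAME NOT IN (
        SELECT db FROM INFORMATION_SCHEMA.PROCESSLIST WHERE db IS NOT NULL
    );

Note this only lists schemas with no current connection, not schemas that are never used.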

Using an index for both uniqueness and fast lookup?

Posted: 18 Jul 2013 04:49 PM PDT

Right now I have a PostgreSQL 8.4 database set up for players in a multiplayer game. I want the username column to be unique. I also want to be able to look up a player by username quickly. Here's the output of \d players:

                             Table "public.players"
       Column   |  Type   |                      Modifiers
    ------------+---------+------------------------------------------------------
     id         | bigint  | not null default nextval('players_id_seq'::regclass)
     username   | text    | not null
     trophies   | integer |
    Indexes:
        "pk_id" PRIMARY KEY, btree (id)
        "players_username_key" UNIQUE, btree (username)
        "players_trophies_index" btree (trophies DESC NULLS LAST)
        "players_username_index" btree (username)

I'm not a DBA, so bear with me. This seems like an inefficient use of disk space by having two indices on the username column: one for uniqueness and one for fast lookup. Is it possible to combine them into one index, maintaining uniqueness and fast lookup? If so, are there any drawbacks to such an approach?
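For what it's worth, a UNIQUE btree index in PostgreSQL serves ordinary equality lookups just like a plain one, so the non-unique duplicate can normally be dropped; a minimal sketch:

    -- Sketch: players_username_key (UNIQUE) already covers lookups on
    -- username, so the plain btree index is redundant.
    DROP INDEX players_username_index;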

MySQLism, or SQL Standard

Posted: 18 Jul 2013 12:28 PM PDT

I just came across a weird issue whereby, depending on how I create an index, an index name may or may not be required.

http://dev.mysql.com/doc/refman/5.5/en/create-index.html
http://dev.mysql.com/doc/refman/5.5/en/alter-table.html

    CREATE INDEX `random_name` ON `my_table` (`my_column`); # Requires an index name

    ALTER TABLE `my_table` ADD INDEX (`my_column`); # Does not require an index name

It seems to me that the CREATE INDEX call should not make the index name required. I'm wondering: is this a MySQLism, or part of the SQL standard?

How to Create Accounting Statement [on hold]

Posted: 18 Jul 2013 11:15 AM PDT

I have data like this:

    Column1         Column2
    ==============  =======
    sales              5000
    purchase           3000
    salesreturn         100
    Purchasereturn      100
    Expense            1000

I want output like this:

    Expense         1000            sales          5000
    Purchase        3000            salesreturn     100(-)
    PurchaseReturn   100(-)
                  ----------                    -------
                        3900                       4900

I am using Crystal Reports to display this output.
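Assuming the data lives in a single two-column table, one way to feed Crystal Reports the two side totals is conditional aggregation on the database side; a sketch with illustrative table and column names:

    -- Sketch: expense side = Expense + purchase - Purchasereturn,
    -- income side = sales - salesreturn, per the sample data above.
    SELECT
        SUM(CASE WHEN Column1 IN ('Expense', 'purchase') THEN Column2
                 WHEN Column1 = 'Purchasereturn' THEN -Column2
                 ELSE 0 END) AS expense_total,
        SUM(CASE WHEN Column1 = 'sales' THEN Column2
                 WHEN Column1 = 'salesreturn' THEN -Column2
                 ELSE 0 END) AS income_total
    FROM AccountData;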

Extended event (sp_statement_completed) firing multiple times

Posted: 18 Jul 2013 01:41 PM PDT

I'm using SQL Server 2008 R2 Enterprise and this is my first foray into extended events. It's pretty cool stuff. I want to create a session that will run basically all the time to audit query times on our SQL Server. I will then be able to pipe that data into a product called Splunk and correlate it with all sorts of other metrics.

Anyway, I'm having some odd things happen (odd to me ... I'm sure there is a reason) with the sp_statement_completed event. The .xel file is growing super fast, like 1 GB in a minute. Looking at the data, I see the same SQL statements duplicated sometimes 75-100 times. What can I do to deduplicate or filter this data before it goes into the file? Many of the events have a CPU of zero and a duration of zero. I think I might even be seeing a little duplication using sql_statement_completed. I've also noticed that I do see stored procedure end times using the sql_statement_completed event. So, is there even a need to use sp_statement_completed?

Any thoughts would be most welcomed and thanks in advance.
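One way to cut the volume is a predicate on the event itself, so the zero-duration statements never reach the file target at all. A minimal sketch (session name, threshold, and path are illustrative; check the duration unit on your build):

    -- Sketch: capture only statements with a measurable duration.
    CREATE EVENT SESSION [audit_query_times] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed (
        ACTION (sqlserver.sql_text, sqlserver.database_id)
        WHERE (duration > 1000)
    )
    ADD TARGET package0.asynchronous_file_target (
        SET filename = N'C:\XE\audit_query_times.xel'
    );

    ALTER EVENT SESSION [audit_query_times] ON SERVER STATE = START;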

SSIS hangs during For Each File enumeration

Posted: 18 Jul 2013 11:17 AM PDT

While debugging an SSIS package that loops over thousands of files, execution hangs with no errors and no failed tasks. Slightly simplified, the package does this:

    For Each File                    (a directory full of pdf files, each ranging from
                                      800 - 2000 pages and 3 - 8MB filesize)
        * Script Task                (get file information)
        * File System Task           (clear out temp folder)
        * Execute Process Task       (use pdftk to break file into individual pages)
        * For Each File              (each page in temp folder, 40 - 100KB each)
            * Script Task            (parse pdf file)
            * Execute Process Task   (pdftk to concatenate like pages)
            * Execute SQL Task       (insert one row into staging table using OPENROWSET)
            * File System Task       (delete inserted file)
            * Script Task            (set variables for next iteration)
        * Execute SQL Task           (execute proc to merge staging into prod)
        * File System Task           (rename completed file)

When I execute the package (debug mode in BIDS), it runs as expected through the first file and well into the second, but eventually hangs between iterations of the inner loop. When it hangs, all tasks are green and the two loop containers are yellow. The output window does not generate any messages, and in the Progress tab each task has a matching number of starts and stops, except for the two loops and the package itself.

My first thought was that I'd exceeded an iteration limit for the For Each File enumerator, but I cannot find any documentation for such a limit. What else can I check to determine why SSIS stops running?

How to do incremental/differential backup every hour in Postgres 9.1?

Posted: 18 Jul 2013 11:42 AM PDT

I'm trying to do an hourly hot incremental backup of a single Postgres server.

I have the following setup in postgresql.conf:

    max_wal_senders = 2
    wal_level = archive
    archive_mode = on
    archive_command = 'copy "%p" "c:\\postgres\\archive\\%f"'

I did a base backup with pg_basebackup -U postgres -D ..\archive -Ft -l 2013-07-07 -x

That made a big base.tar file in the archive folder and added some files with long names, which I assume are the WAL segments.

pg_start_backup('label') and pg_stop_backup() seem to create the WAL files in pg_xlog, and then copy them to the archive folder.

  1. What command(s) do I run to do a new incremental backup (pg_basebackup does a new base backup, which I don't want right now)? Do I just run select pg_start_backup('label'); select pg_stop_backup(); on a schedule?

  2. What does the label in pg_basebackup and pg_start_backup() do exactly?

  3. WAL files don't seem to be removed from pg_xlog. What should I do about that? It seems to keep 5 or so WALs in pg_xlog. Is that to be expected?

  4. Do I need to back up the .backup files in the archive folder or just the 16,384KB WAL files?

  5. Should I use the --xlog parameter, and if so, do I need to change wal_keep_segments from 0?
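For the hourly "incremental" part, a common approach on 9.1 is simply to force a WAL segment switch on a schedule, so archive_command ships everything written so far; a sketch:

    -- Sketch: run hourly (e.g. from Task Scheduler via psql); completes the
    -- current WAL segment and hands it to archive_command.
    -- (pg_switch_xlog is the 9.x name; PostgreSQL 10+ renamed it pg_switch_wal.)
    SELECT pg_switch_xlog();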

How to determine Oracle LOB storage footprint?

Posted: 18 Jul 2013 11:23 AM PDT

With SECUREFILE storage I can specify whether I want compression (and the level of it) and deduplication, and it's all a trade-off between time and space.

Timing is fairly easy to profile but what's the easiest way to get a reasonably accurate measurement of how much space a specific LOB column takes up?
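One reasonably accurate approach is to sum the sizes of the segments behind the LOB column, since the LOB data and its index live in their own segments; a sketch (the table name is illustrative):

    -- Sketch: user_lobs maps a LOB column to its data and index segments;
    -- user_segments has the allocated bytes for each.
    SELECT l.column_name, s.segment_name, s.bytes
    FROM   user_lobs l
    JOIN   user_segments s
           ON s.segment_name IN (l.segment_name, l.index_name)
    WHERE  l.table_name = 'MY_TABLE';

Note this reports allocated space, which can overstate actual usage; the DBMS_SPACE package gives a finer-grained picture if needed.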

Create a trigger to update table data on another Server's database

Posted: 18 Jul 2013 07:27 PM PDT

I am creating a trigger in MySQL and I need a little help.

I have 2 websites, 2 databases (same name) on 2 different web servers, S1 & S2.

These databases have the same table names.

I want the user data on both websites to be the same.

So if one user registers on S1, that user's registration information should be passed to S2.

If a user's registration information is updated on S1, the same information should be updated on S2.

And the same applies for S2.

How can I create a trigger so that every time there is an insert / update / delete in the database on S1, the user table on S2 also gets updated automatically?

And every time there is an insert / update / delete in the database on S2, the user table on S1 also gets updated automatically.

Is this possible? Could you provide some examples?
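A caveat worth knowing: a MySQL trigger cannot open a connection to another server by itself, but it can write to a local FEDERATED table that points at the remote table, which gets part of the way there (MySQL replication is usually the more robust answer). A sketch with illustrative names, assuming the FEDERATED engine is enabled on S1:

    -- Sketch: local FEDERATED table on S1 backed by the users table on S2.
    CREATE TABLE users_s2 (
        id   INT NOT NULL,
        name VARCHAR(255),
        PRIMARY KEY (id)
    ) ENGINE=FEDERATED
      CONNECTION='mysql://repl_user:password@s2.example.com:3306/mydb/users';

    -- Mirror inserts on S1 to S2 via the FEDERATED table.
    CREATE TRIGGER users_after_insert AFTER INSERT ON users
    FOR EACH ROW
        INSERT INTO users_s2 (id, name) VALUES (NEW.id, NEW.name);

The same pattern would need AFTER UPDATE and AFTER DELETE triggers, and a failed remote write fails the local statement, which is the main drawback of this approach.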

SSRS Reports on WUXGA Displays

Posted: 18 Jul 2013 12:10 PM PDT

We use WUXGA displays (1920 x 1200) and have lots of SSRS reports. Since these reports need to be printed once in a while, the layout is adjusted to A4. However, when users look at the reports on the display, around 40% of the screen is blank: an absolute waste of screen space. How can I adjust SSRS to use the full screen size for reports?

Tool to export data with all relational data?

Posted: 18 Jul 2013 05:27 PM PDT

Is there a tool to export data from selected rows in a table together with all related data stored in other tables, linked by the relational design?

The purpose is to ease ad hoc migrations of bits of data between servers. I am looking specifically for a tool for MySQL InnoDB with defined foreign keys.

mysql: need help to optimize my query/table

Posted: 18 Jul 2013 10:27 AM PDT

I'm wondering if someone could help me optimize my tables/query to speed up a query. It is currently running ridiculously slowly. I think a well-thought-out index could help me. Any help would be really appreciated.

The URLS and TAGS tables mentioned below have 2 and 20 million rows respectively (and will probably end up with 10x that). A query like the one below already takes 10 seconds to run.

An Example: http://whatrethebest.com/php+tutorials

Tables

    CREATE TABLE IF NOT EXISTS `TAGS` (
      `hash` varchar(255) NOT NULL,
      `tag` varchar(255) NOT NULL,
      UNIQUE KEY `my_unique_key` (`hash`,`tag`),
      KEY `tag` (`tag`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

and

    CREATE TABLE IF NOT EXISTS `URLS` (
      `url` text NOT NULL,
      `domain` text,
      `title` text NOT NULL,
      `description` text,
      `numsaves` int(11) NOT NULL,
      `firstsaved` varchar(256) DEFAULT NULL,
      `md5` varchar(255) NOT NULL DEFAULT '',
      PRIMARY KEY (`md5`),
      UNIQUE KEY `md5` (`md5`),
      KEY `numsaves` (`numsaves`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

QUERY

    SELECT urls.md5, urls.url, urls.title, urls.numsaves
    FROM urls
    JOIN tags ON urls.md5 = tags.hash
    WHERE tags.tag IN ('php', 'tutorials')
    GROUP BY urls.md5
    HAVING COUNT(*) = 2
    ORDER BY urls.numsaves DESC
    LIMIT 20

EXPLAIN

I'm not sure what this shows:

    id  select_type  table  type    possible_keys      key      key_len  ref                                   rows    Extra
    1   SIMPLE       tags   range   my_unique_key,tag  tag      767      NULL                                  230946  Using where; Using index; Using temporary; Using filesort
    1   SIMPLE       urls   eq_ref  PRIMARY,md5        PRIMARY  767      jcooper_whatrethebest_urls.tags.hash  1

So I think the problem is:

Certain tags like 'php' have 34,000 entries, most of which have under 5 saves. But in order to get the 20 most saved, it has to sort them all. Right?

I can't really create a numsaves column in TAGS and index on that, because that number changes up and down all the time, so it wouldn't make sense. Is it possible to create a cross-table index between urls.numsaves and tags.tag? Or a third table to use in my query somehow? Would this solve my problem? I know almost nothing about indexing.

Any help would be really appreciated!


EDITS BELOW

RESPONSE TO YperCube:

Thank you, thank you; your suggestions have sped up my queries by a factor of 10-20x. This is an immense improvement. I can't thank you enough.

I'm posting my current queries and tables with execution times in case you or anyone else has any more optimization suggestions. I am worried that as my table grows I may not be able to keep my search times under 3 seconds, which would be a killer.

New Query Example 1

    SELECT u.id, u.url, u.title, u.numsaves
    FROM urls AS u
    JOIN tags AS t1 ON t1.url_id = u.id AND t1.tag = 'programming'
    JOIN tags AS t2 ON t2.url_id = u.id AND t2.tag = 'language'
    ORDER BY u.numsaves DESC
    LIMIT 20

    Showing rows 20 - 19 ( 20 total, Query took 0.2009 sec)

    id  select_type  table  type    possible_keys           key      key_len  ref                                         rows   Extra
    1   SIMPLE       t2     ref     tag_id                  tag_id   767      const                                       53820  Using where; Using index; Using temporary; Using filesort
    1   SIMPLE       t1     ref     tag_id                  tag_id   772      const,jcooper_whatrethebest_urls.t2.url_id  1      Using where; Using index
    1   SIMPLE       u      eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        jcooper_whatrethebest_urls.t2.url_id        1

New Query Example 2 (seems to be slower)

    SELECT u.id, u.url, u.title, u.numsaves
    FROM urls AS u
    JOIN (
        SELECT ui.id, ui.numsaves
        FROM urls AS ui
        JOIN tags AS t1 ON t1.url_id = ui.id AND t1.tag = 'programming'
        JOIN tags AS t2 ON t2.url_id = ui.id AND t2.tag = 'language'
        ORDER BY ui.numsaves DESC
        LIMIT 20
    ) AS ulim ON ulim.id = u.id
    ORDER BY ulim.numsaves DESC;

    Showing rows 0 - 29 ( 2,794,577 total, Query took 0.4633 sec)

    id  select_type  table       type    possible_keys           key      key_len  ref                                   rows   Extra
    1   PRIMARY      <derived2>  ALL     NULL                    NULL     NULL     NULL                                  20     Using filesort
    1   PRIMARY      u           eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        ulim.id                               1
    2   DERIVED      t2          ref     tag_id                  tag_id   767                                            53820  Using where; Using index; Using temporary; Using filesort
    2   DERIVED      t1          ref     tag_id                  tag_id   772      jcooper_whatrethebest_urls.t2.url_id  1      Using where; Using index
    2   DERIVED      ui          eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        jcooper_whatrethebest_urls.t2.url_id  1

Using Query Example on a Single Tag (slower by a lot)

    SELECT u.id, u.url, u.title, u.numsaves
    FROM urls AS u
    JOIN tags AS t1 ON t1.url_id = u.id AND t1.tag = 'programming'
    ORDER BY u.numsaves DESC
    LIMIT 20

    Showing rows 20 - 19 ( 20 total, Query took 3.7395 sec)

    id  select_type  table  type    possible_keys           key      key_len  ref                                   rows    Extra
    1   SIMPLE       t1     ref     tag_id                  tag_id   767      const                                 200576  Using where; Using index; Using temporary; Using filesort
    1   SIMPLE       u      eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        jcooper_whatrethebest_urls.t1.url_id  1

I'm not sure why this one is so much slower.

Do you have any ideas for optimizing a query on a single tag?

My Current Tables

    CREATE TABLE `urls` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `url` text NOT NULL,
      `domain` text,
      `title` text NOT NULL,
      `description` text,
      `numsaves` int(11) NOT NULL,
      `firstsaved` varchar(256) DEFAULT NULL,
      `md5` varchar(255) NOT NULL DEFAULT '',
      PRIMARY KEY (`id`),
      UNIQUE KEY `md5` (`md5`),
      KEY `id_numsaves_IX` (`id`,`numsaves`)
    ) ENGINE=InnoDB AUTO_INCREMENT=2958560 DEFAULT CHARSET=utf8

    CREATE TABLE `tags` (
      `url_id` int(11) DEFAULT NULL,
      `hash` varchar(255) NOT NULL,
      `tag` varchar(255) NOT NULL,
      UNIQUE KEY `tag_id` (`tag`,`url_id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8

Thank you again

Replication on MySQL server

Posted: 18 Jul 2013 03:27 PM PDT

I had to stop the slave server to test something. After I started the server again, there is a problem with replication on the MySQL server.

On the problematic server

    mysql> SHOW SLAVE STATUS\G
    *************************** 1. row ***************************
                   Slave_IO_State: Connecting to master
                      Master_Host: servera
                      Master_User: replica
                      Master_Port: 3306
                    Connect_Retry: 60
                  Master_Log_File: servera-bin.000024
              Read_Master_Log_Pos: 808459481
                   Relay_Log_File: serverb-relay-bin.000071
                    Relay_Log_Pos: 4
            Relay_Master_Log_File: servera-bin.000024
                 Slave_IO_Running: No
                Slave_SQL_Running: Yes
                  Replicate_Do_DB:
              Replicate_Ignore_DB:
               Replicate_Do_Table:
           Replicate_Ignore_Table:
          Replicate_Wild_Do_Table:
      Replicate_Wild_Ignore_Table:
                       Last_Errno: 0
                       Last_Error:
                     Skip_Counter: 0
              Exec_Master_Log_Pos: 808459481
                  Relay_Log_Space: 106
                  Until_Condition: None
                   Until_Log_File:
                    Until_Log_Pos: 0
               Master_SSL_Allowed: No
               Master_SSL_CA_File:
               Master_SSL_CA_Path:
                  Master_SSL_Cert:
                Master_SSL_Cipher:
                   Master_SSL_Key:
            Seconds_Behind_Master: NULL
    Master_SSL_Verify_Server_Cert: No
                    Last_IO_Errno: 1129
                    Last_IO_Error: error connecting to master 'replica@servera:3306' - retry-time: 60  retries: 86400
                   Last_SQL_Errno: 0
                   Last_SQL_Error:

On the problematic server:

    SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
    +---------+-----------+
    | user    | host      |
    +---------+-----------+
    | root    | localhost |
    | root    | serverb   |
    | root    | 127.0.0.1 |
    | replica | servera   |
    | replica | serverb   |
    +---------+-----------+

On the main server:

    SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
    +---------+-----------+
    | user    | host      |
    +---------+-----------+
    | root    | localhost |
    | root    | servera   |
    | root    | 127.0.0.1 |
    | replica | servera   |
    | replica | serverb   |
    +---------+-----------+

According to what I've read, I need to execute the following command on the main server:

mysql> FLUSH HOSTS;   

What will happen then? If any application is connected to it, will it be disconnected as well?

Merge Replication identity field issues

Posted: 18 Jul 2013 02:27 PM PDT

One of our clients is using our software with merge replication of a database on a SQL Server 2008 R2 machine. There are two production environments in separate geographical locations, only one of which is live at any one time: basically one live and one on standby. Only the live database is updated by the applications. Every couple of months they fail over between the datacentres and the standby environment becomes the live one. There is an instance of SQL Server 2008 in each datacentre, and merge replication is used to keep them in sync. This was all working fine until the beginning of the year, when we started getting replication errors with some lTID columns in various tables that have the Identity property set.

The errors were like this one:

The insert failed. It conflicted with an identity range check constraint in database 'GateMain', replicated table 'dbo.tGateCalcsLog', column 'lTID'. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent.

Then after the last failover we noticed an issue with the lTID values in one specific table. Our application relies on lTID values always incrementing in order, such that the highest lTID value is always the newest entry in the table. We've found that, because of how the identity ranges are managed by replication, after a failover the lTID range of the now-live database server may contain values lower than those already present in the table. Is there a way to manage this in merge replication so we can guarantee that the next identity value allocated to the lTID column is greater than any lTID currently in the table? Or do we need to use a different type of replication, or possibly mirroring?
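For the range-check errors themselves, the conflict message quoted above names the relevant procedure; a sketch of running it on the publisher (the publication name is illustrative):

    -- Sketch: reallocate the publisher's identity range for the affected
    -- article, then run the Merge Agent so subscribers pick up new ranges.
    EXEC sp_adjustpublisheridentityrange
         @publication = 'MyPublication',
         @table_name  = 'tGateCalcsLog';

Note this addresses the constraint errors but not the ordering problem: replication-managed identity ranges hand out disjoint blocks by design, so "highest lTID = newest row" is not guaranteed across failovers.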

MySQL Slaves lag behind master

Posted: 18 Jul 2013 04:27 PM PDT

I have one master and four slaves. Sometimes all my slaves lag behind the master. I have implemented the heartbeat for monitoring replication lag. Now I am trying to find why the slaves are lagging behind the master.

I saw the slow queries (for today) on the master and I found that the slowest query (DML) was taking 138 seconds. But the slaves were lagging about 1400 seconds and there were also no slow queries on the slaves for DML (update, delete, insert, etc.).

Points to be taken into consideration:

  1. All tables are InnoDB.
  2. 68 GB of RAM (Master as well as slaves).
  3. Data size about 1 TB.
  4. Master and slaves have been running for a long time.

What may be the reason for lag?

MySQL replication between VPS and shared host

Posted: 18 Jul 2013 06:27 PM PDT

I have a VPS with SSH access, and a shared host with cPanel. Both have MySQL.

I need to create a master/slave setup between the two servers. As per this tutorial, How To Set Up Database Replication In MySQL, I will need to modify the my.cnf file.

Is it possible to achieve this with the following permissions on both the servers?

  • VPS

    I have root access with SSH login, so I can modify the /etc/my.cnf file.

  • Shared host

    cPanel with phpMyAdmin.

How to store massive quantities of machine generated data?

Posted: 18 Jul 2013 10:08 AM PDT

I am tasked with developing a system that stores massive quantities of machine-generated data (syslogs) from a number of server farms, and I am wondering what tools you fine folks use in such instances. Our project will need to keep up to 10 TB of data (maybe more in the future) and serve queries fairly quickly. I have been reading about columnar RDBMSs as well as NoSQL options, which all seem promising.

What GUI tools can be used for PostgreSQL in Mac OS X?

Posted: 18 Jul 2013 10:06 AM PDT

I have recently started using PostgreSQL with my Rails app.

I was looking for a SQL GUI for the PostgreSQL database (for Mac).

Anything better than pgAdmin3?

What are the challenges of administrating an International stock trading database?

Posted: 18 Jul 2013 09:01 PM PDT

I am a student, and have a question for an assignment:

What problems and challenges do database administrators face when designing systems to trade international stocks in real time, 24 hours a day, 6 days a week? Are cloud-based database systems as a service a viable option?

How do I change the DEFINER of a VIEW in Mysql?

Posted: 18 Jul 2013 11:27 AM PDT

When I run mysqldump, I get an error:

mysqldump: Got error: 1449: The user specified as a definer ('root'@'foobar') does not exist when using LOCK TABLES  

This makes sense because foobar is a legacy machine that no longer exists.

How do I change the definer of all my tables to 'root'@'localhost'?
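Since the post title is about views: MySQL stores each view's definer in information_schema.VIEWS, and ALTER VIEW accepts a DEFINER clause, so one workaround is to generate the fix statements from metadata. A sketch (the account values follow the error above):

    -- Sketch: emit an ALTER statement per view owned by the stale definer;
    -- review the generated SQL before running it, since VIEW_DEFINITION
    -- loses some of the original formatting.
    SELECT CONCAT('ALTER DEFINER = ''root''@''localhost'' VIEW ',
                  TABLE_SCHEMA, '.', TABLE_NAME,
                  ' AS ', VIEW_DEFINITION, ';') AS fix_stmt
    FROM information_schema.VIEWS
    WHERE DEFINER = 'root@foobar';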

Dynamic temporary index creation

Posted: 18 Jul 2013 09:40 AM PDT

In my undergrad databases course, my professor mentioned that some DBMS software is so advanced that it can detect when a long-running query would benefit from an index. The DBMS can then create that index for the duration of the query to increase performance.

However, I don't think that's possible, and some initial Googling seems to agree with me. Does anyone know of a DBMS that actually implements this strategy?

Concurrent Transactions vs Simultaneous Transactions

Posted: 18 Jul 2013 09:41 AM PDT

Chapter 6 of David Kroenke's Database Concepts: Database Administration

Is my understanding correct that in a database, simultaneous transactions don't really happen? Instead, concurrent transactions happen so fast that users think they are simultaneous?

The reason I ask is that I was thrown off by a question on my assignment asking how many CPUs are required to perform a simultaneous transaction. I answered that only one CPU is required, since what really happens is a concurrent transaction, not a simultaneous one.

Thoughts, anyone?

EDIT: ~ jcolebrand --

@gbn and @Mark,

I appreciate your input; however, I have absolutely no idea what those terms mean, because the course I am currently taking is an introductory course to database management, and my question was only discussed in one paragraph of the book, which did not elaborate any further. I might be able to understand your responses in time, but not just yet. Thanks though!

Calculate months between two dates

Posted: 18 Jul 2013 09:42 AM PDT

For a University course I've been given the following task

Suppose that we have an employee table like (first_name, last_name, salary, hire_date, ...). There is a regulation which states that every employee, each year after he/she is hired, must renew his/her contract with the company. Show how many months remain before the next renewal for each employee.

This is my attempt:

    select (abs(months_between(sysdate, hire_date)/12 -
            round((months_between(sysdate, e.hire_date))/12)))
    from employees e
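For comparison, here is a sketch that computes the remaining months directly with MOD, which avoids the rounding issues of dividing by 12 (assuming the renewal falls on each hire-date anniversary):

    -- Sketch: MOD(months since hire, 12) is how far we are into the
    -- current contract year; 12 minus that is what remains (TRUNC keeps
    -- whole months).
    SELECT first_name, last_name,
           12 - TRUNC(MOD(MONTHS_BETWEEN(SYSDATE, hire_date), 12)) AS months_to_renewal
    FROM employees;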

Are there any good and free tools for managing a PostgreSQL database?

Posted: 18 Jul 2013 10:59 AM PDT

Previously I used phpMyAdmin to manage a MySQL database, but now I would like to manage a PostgreSQL database. The PostgreSQL database is on a server, but not a web server, so I don't use PHP.

Are there any good and free tools for managing a PostgreSQL database?

I have tried pgAdmin III, but it was a far from intuitive application to use compared to phpMyAdmin. What do PostgreSQL DBAs usually use? Do they use graphical tools like pgAdmin III, or is it mostly command-line tools?
