Friday, May 17, 2013

[how to] How do I create rows of test data in each table?

How do I create rows of test data in each table?

Posted: 17 May 2013 09:28 PM PDT

I am currently working on an assignment. Part of the assignment states: "Create a minimum of 15 rows of test data in each table and create at least 3 queries that join two tables and return values from both tables."

How do I do this? Any suggestions are much appreciated.

Thanks, M
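
A minimal sketch of what the test data and one of the joins usually look like, using hypothetical department/employee tables (your assignment's tables, columns, and row counts will differ):

    INSERT INTO department (dept_id, dept_name) VALUES
      (1, 'Sales'), (2, 'Engineering'), (3, 'Support');
    -- ...continue until the table has at least 15 rows.

    INSERT INTO employee (emp_id, emp_name, dept_id) VALUES
      (1, 'Alice', 1), (2, 'Bob', 2), (3, 'Carol', 3);
    -- ...again, at least 15 rows.

    -- A query that joins two tables and returns values from both:
    SELECT e.emp_name, d.dept_name
    FROM employee AS e
    JOIN department AS d ON d.dept_id = e.dept_id;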

Database structure for tags?

Posted: 17 May 2013 04:46 PM PDT

I posted a thread on Stack Overflow and one person pointed out that my database structure wasn't the best, so now I'm here to ask how to do this in a good and proper way.

Currently I have one table called "users", and in it I store a lot of user info. In "users" I have a column called "tags" where the information inside the column looks like "#dj #personal #coding", and so on for every user. Is there any better way to store tags describing the user? I have one column called "tags" where the user has stored tags describing him/her, and also a column called "interests" where tags are stored to remember what the user is looking for...

Basically, is there any better way to store these tags for each user?
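
A common alternative is to normalize the tags into their own table plus a junction table, instead of packing them into one string column. A hedged, MySQL-flavoured sketch (table and column names are illustrative, and users(id) is assumed to be your existing primary key):

    CREATE TABLE tags (
        tag_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name   VARCHAR(64)  NOT NULL UNIQUE
    );

    CREATE TABLE user_tags (
        user_id INT NOT NULL,   -- must match the type of users.id
        tag_id  INT UNSIGNED NOT NULL,
        purpose ENUM('describes', 'interest') NOT NULL,
        PRIMARY KEY (user_id, tag_id, purpose),
        FOREIGN KEY (user_id) REFERENCES users(id),
        FOREIGN KEY (tag_id)  REFERENCES tags(tag_id)
    );

    -- All users tagged #dj:
    SELECT u.*
    FROM users u
    JOIN user_tags ut ON ut.user_id = u.id AND ut.purpose = 'describes'
    JOIN tags t       ON t.tag_id = ut.tag_id
    WHERE t.name = 'dj';

This also makes queries such as "the most common interests" or "users sharing a tag" straightforward, which string matching on a "#dj #personal" column does not.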

Percona Xtradb cluster only replicating DDL statements

Posted: 17 May 2013 07:00 PM PDT

I'm having a hard time with Percona XtraDB Cluster. The cluster has 3 nodes and is up and running (all tip-top according to the wsrep status indicators).

It did the SST perfectly fine from one node to the other 2, but every time I do an update in a table, the write is not propagated to the other nodes, even though DDL statements such as table creation are!

Any idea what's going on? I turned all sorts of logging on and I keep seeing "Skipping empty log_xid" messages in the error log, but they don't affect the tables I'm testing, so I'm not sure this is related.

I'm talking about InnoDB tables, by the way, that were inherited from MySQL 5.5 (not Percona MySQL). I'm attaching the wsrep_ status of a single node; the other 2 are very similar. At the very least there should be some logging indicating where the problem is; the debug-level Galera logs are silent about these queries!

wsrep status: gist.github.com/anonymous/fdb2501ee146fba99c5d

dump of mysql variables (i have too many config files): gist.github.com/anonymous/d6bae549ea9d4bae9ea6

error log: gist.github.com/anonymous/8050335826568394cbf3
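
A hedged sanity check worth running on each node (a suggestion, not a confirmed diagnosis): Galera replicates ordinary DML as row events, so a non-ROW binlog format or a table in an engine it does not handle can produce exactly this "DDL replicates, DML does not" symptom.

    SHOW VARIABLES LIKE 'binlog_format';         -- expected to be ROW for Galera
    SHOW VARIABLES LIKE 'wsrep_on';
    SHOW GLOBAL STATUS LIKE 'wsrep_replicated';  -- should grow after each local write
    SELECT table_name, engine
    FROM information_schema.tables
    WHERE table_schema = 'your_db';              -- 'your_db' is a placeholder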

Table constraints to limit value insertions based on other values in a row

Posted: 17 May 2013 08:21 PM PDT

Using SQL Server 2008 R2

Is it possible to limit what values are allowed in a column based on the other values in the row?

Ex: myTable:

  ID, Test_mode, Active
  1 , 1        , NULL
  2 , 0        , 1
  3 , 1        , 0

Is there a way to either change the value of Test_mode to 0 if a 1 is inserted into Active

OR

If Test_mode is 1, disallow insertion/update of Active

OR

Throw some kind of error if Test_mode is 1 and an insertion/update of Active is attempted.

Active can only be NULL, 1, or 0, and it can only be 1 when Test_mode is 0.

I hope this makes sense; if not, let me know and I'll update the question.
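
For the "throw some kind of error" variant, a CHECK constraint is usually enough; a sketch against the example table above (adjust names to your real schema):

    ALTER TABLE myTable
    ADD CONSTRAINT CK_myTable_Active_TestMode
        CHECK (Active IS NULL OR Active = 0 OR (Active = 1 AND Test_mode = 0));

Silently flipping Test_mode to 0 instead of raising an error would need an INSTEAD OF or AFTER trigger rather than a constraint.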

MySQL Insert Row With Foreign Key Equal to ID - Circular Reference

Posted: 17 May 2013 03:18 PM PDT

In a MySQL database I have an events table which has a circular reference to itself, with the following columns:

id (PK) | user_id (FK) | event_id (FK) | title  

Each new row inserted has a circular reference to another row in the table. The problem is how do I create the first event for each user?

I have tried manually inserting the IDs, so I know it is possible to have an event reference itself. For example, if I send an explicit query with id = 1, user_id = 1, event_id = 1, title = 'test', then the row is added fine, and adding further events for that user is then straightforward.

The problem is that this solution is only appropriate for testing. In production, when a user wants to create their first event, how can I construct a query so it sets the event_id to the same value as the new auto increment value on the id column?

To clarify, the first event for a user will throw a foreign key constraint error because I do not know the event_id that should be set.

As requested

CREATE TABLE `events` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int(10) unsigned NOT NULL,
  `event_id` int(10) unsigned NOT NULL,
  `title` varchar(200) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `events_event_id_title_unique` (`event_id`,`title`),
  KEY `events_user_id_foreign` (`user_id`),
  CONSTRAINT `events_event_id_foreign` FOREIGN KEY (`event_id`)
    REFERENCES `events` (`id`) ON DELETE CASCADE,
  CONSTRAINT `events_user_id_foreign` FOREIGN KEY (`user_id`)
    REFERENCES `users` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=14 DEFAULT CHARSET=utf8;
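
One way to handle the very first event per user is to insert a placeholder and immediately point the row at itself, relaxing the self-referencing check only inside that transaction. A hedged sketch (user_id 42, the 0 placeholder, and the title are illustrative; concurrent first inserts could briefly collide on the (event_id, title) unique key, so treat this as a sketch rather than a production recipe):

    START TRANSACTION;
    SET foreign_key_checks = 0;   -- session-scoped; relaxes the self-reference only here
    INSERT INTO events (user_id, event_id, title) VALUES (42, 0, 'first event');
    UPDATE events SET event_id = LAST_INSERT_ID() WHERE id = LAST_INSERT_ID();
    SET foreign_key_checks = 1;
    COMMIT;

The alternative many schemas choose is to make event_id nullable, with NULL meaning "the root event", which removes the need for any self-reference trick.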

Packages that PHP needs to connect to MySQL

Posted: 17 May 2013 01:24 PM PDT

I want to know the minimum packages that PHP needs to connect to MySQL. I know of some packages such as php5-mysql, mysql-common, mysql-client, ... I would also like a short description of the architecture of the connection between PHP and MySQL; for example, what do php5-mysql, mysql-common, etc. actually do?

MySQL Master/Slave without populating tables

Posted: 17 May 2013 01:33 PM PDT

So I have some Zabbix instances which use MySQL for their back ends. For DR purposes, I would like to back up the DB for each instance using a master/slave configuration, with the slave node sitting at the DR site. However, I don't want to capture trend/historical data; all I want is configuration data. Is there a way to replicate Zabbix's database without populating the tables which hold all of the application's historical monitoring metrics?
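
MySQL replication filters on the slave can do this; a hedged sketch of the DR slave's my.cnf (the wildcard patterns assume the Zabbix history/trends tables keep their default names in a schema called zabbix):

    [mysqld]
    replicate-wild-ignore-table = zabbix.history%
    replicate-wild-ignore-table = zabbix.trends%

The filtered tables will simply be missing or stale on the slave, so the DR copy is only usable for the configuration data you actually replicate.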

Add new shard - always best?

Posted: 17 May 2013 12:56 PM PDT

In our setup we currently have a 3-shard sharded cluster, each shard being a replica set of 3. Our writes are about to go up significantly to implement a new feature, and we know the extra capacity will be necessary. Our writes are basically all upserts (which will usually be updates) and updates that increment a particular field by 1.

Because the updates always increment by 1, and because of the way our data is distributed, not all documents are treated equally; some get their fields incremented a lot more. An alternative solution I thought could be effective is to have some kind of middle man, like a few Redis databases (or some smaller mongods): do the updates to them first, and after about 5 minutes (or using some queueing system) have a pool of workers consume the data and apply the documents to the actual live cluster. This would save our main cluster a ton of writes, since it would allow update-heavy documents to accumulate their increments (exact numbers I will post shortly in an edit).

So bottom line, when is adding another shard not the right solution?

Sudden increase in log_file_sync waits

Posted: 17 May 2013 03:08 PM PDT

I'm on Oracle 11gR2 with a 2-node RAC system. It's shared fiber storage to an EMC Clariion.

Last Friday things went bad, fast. All of a sudden, processes that had normally run fine for years became very, very slow. I noticed a sudden increase in log file sync waits, and the LGWR process is listed as a blocker for several processes. Nothing changed that Friday that we're aware of. Also, it appears to be on just one node.

Statspack reports confirm that the log file sync wait time went from around 1 ms to 47 ms! Additionally, Statspack shows this, meaning some of the waits are taking a long time:

                           Total ----------------- % of Waits ------------------
Event                      Waits  <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
log file sync               100K    .0    .3   1.7   9.0  25.4  31.0  32.4    .1

And before it was this:

                           Total ----------------- % of Waits ------------------
Event                      Waits  <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
log file sync              1589K  72.3  20.4   5.4   1.2    .6    .1    .0

What can cause this? What should I be checking for?
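
Two hedged starting points for narrowing this down, using standard v$ views (nothing here is specific to this system): compare log file sync against log file parallel write to separate LGWR I/O time from commit overhead.

    SELECT event,
           total_waits,
           ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_ms
    FROM   v$system_event
    WHERE  event IN ('log file sync', 'log file parallel write');

If log file parallel write degraded in step with log file sync, the redo log I/O path (LUNs, paths, array write cache) is the first suspect; if it did not, look at commit frequency and LGWR scheduling on that node.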

"relation does not exist" trying to import mysql dump into postgres

Posted: 17 May 2013 08:18 PM PDT

environment:

ubuntu 10.04
mysql server 5.1.69
postgres 9.2

Here's the sequence of steps:

  1. created a new postgres database, myDatabase
  2. executed this command: mysqldump -u root -p --compatible=postgresql myDatabase > mydump.sql
  3. executed this command: psql -h localhost -d myDatabase -U postgres
  4. At postgres prompt, executed these commands:

    SET standard_conforming_strings = 'off';
    SET backslash_quote = 'on';
    \i mydump.sql;

At this point, the process began, and after getting tons of

HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.
psql:mydump.sql:196: WARNING:  nonstandard use of escape in a string literal
LINE 1: ...012053',50.18,50.36,49.4,49.69,59592,184824),(115,'\"2012060...

I get

psql:mydump.sql:196: ERROR:  relation "pricedata" does not exist
LINE 1: INSERT INTO "pricedata" VALUES (113,'19981102',0.6275,0.6275...

I checked the mydump.sql file, and it does contain a CREATE TABLE "pricedata" statement, but obviously this didn't have the desired effect on the Postgres database.

I'm not sure what I'm supposed to do to get this to work. Any assistance would be greatly appreciated.
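
A hedged first check: the CREATE TABLE in the dump may itself have failed (MySQL-specific types, backticks, or ENGINE clauses that --compatible=postgresql does not fully translate), in which case the later INSERTs have nothing to land in. Scrolling back to the first error above line 196 usually tells the story; you can also confirm what actually got created:

    \dt
    SELECT table_schema, table_name
    FROM   information_schema.tables
    WHERE  table_name ILIKE 'pricedata';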

How do I add an index column to a database with non-unique columns

Posted: 17 May 2013 12:04 PM PDT

I have a database with non-unique data in its columns. How do I add an index column or a primary key column?
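
If the goal is simply a surrogate key on an existing table, a hedged sketch in MySQL syntax (the table name is a placeholder; other engines use an IDENTITY column or a sequence instead):

    ALTER TABLE my_table
        ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;

The existing non-unique columns are untouched; the new id column just gives every row a unique handle.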

Candidate key = Key = Minimal Superkey?

Posted: 17 May 2013 11:56 AM PDT

I got a little confused by all these keys. As I understand it:

A key of a relational schema
  = the minimal subset of a superkey that is still a key
  = a candidate key

Is this correct or not?

Import an OpenOffice .ods file with a specific structure, like date and value fields

Posted: 17 May 2013 01:19 PM PDT

How can I import the data while keeping its structure, like date fields and numeric fields (the rest of the fields are text)? When importing into MySQL from an .ods spreadsheet with phpMyAdmin's import function, everything is converted to text fields.
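
One hedged workaround: create the target table with the proper types first, export the spreadsheet to CSV, and load it with LOAD DATA, so the types come from the table definition rather than from the importer (names and formats are illustrative):

    CREATE TABLE prices (
        trade_date DATE,
        price      DECIMAL(12,4),
        note       VARCHAR(255)
    );

    LOAD DATA LOCAL INFILE '/tmp/prices.csv'
    INTO TABLE prices
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    IGNORE 1 LINES
    (@d, price, note)
    SET trade_date = STR_TO_DATE(@d, '%Y-%m-%d');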

Gathering data from independent databases to a central one

Posted: 17 May 2013 11:13 AM PDT

I need to do the following. In a LAN there are 4 Postgres boxes, each having a small Postgres database. These servers gather data from the internet and insert it as records into a table in their DB. I need to set up a server that will gather the data from that table on each of the 4 servers and merge the records into one table of the same structure, which will then be used for centralized processing. The 4 servers fill their tables at about 10 records per minute, and the requirement is that at least once a minute each server's new data must be propagated to the "master". Data on the servers must be held for some time (~ one month). The tables have a field of type UUID, so uniqueness across all servers is not a problem. But I don't know what the best practice would be in my scenario:

  1. Use dblink on the master to pull from the servers, with a psql crontab job or pgAgent. In this scenario there is the following problem: I need somehow to mark the servers' records as already pulled, so that the dblink query can have a condition and doesn't fetch the whole table each time. There is the advantage that I could fetch all the rows and insert only the ones that don't exist, but it is not acceptable to query the whole table every time. Can I somehow do an update with dblink (see the sketch at the end of this question)? Is this solution generally recommended for my scenario?

  2. Use COPY TO on the servers, mark the rows, scp the file to the master, and then COPY FROM on the master. I don't have any strong objection to this, other than that it is more complicated than the above, since it needs separate recurring jobs on the 4 servers and on the master; I would like more experienced people to comment on it before I use it.

The obvious solution of having the servers update the master directly is not an option, because it creates a single point of failure for the 4 servers, and in any case it is a given condition for me that the servers update their own database instance.

Any comments, other solutions?

(All the servers are Fedora 17 32-bit / Postgres 9.2.)

EDIT: I started investigating replication solutions, but they seem like overkill for my case since I don't need any fancy failover or conflict management; then again, maybe I am mistaken.
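
Regarding option 1: dblink can both pull the new rows and mark them as pulled on the remote side. A hedged sketch, assuming the dblink extension is installed on the master and that each server's table has (or gains) a boolean pulled flag; connection strings, table, and column names are placeholders:

    -- Pull unpulled rows from one server into the central table.
    INSERT INTO central_table (uuid, payload, created_at)
    SELECT uuid, payload, created_at
    FROM dblink('host=server1 dbname=app user=collector password=secret',
                'SELECT uuid, payload, created_at FROM source_table WHERE NOT pulled')
         AS t(uuid uuid, payload text, created_at timestamptz);

    -- dblink_exec runs a command on the remote server, so the marking is an update there.
    SELECT dblink_exec('host=server1 dbname=app user=collector password=secret',
                       'UPDATE source_table SET pulled = true WHERE NOT pulled');

Rows arriving between the two statements could be marked without having been copied, so in practice you would mark by the set of UUIDs actually fetched, or wrap both steps in a remote transaction.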

Performance of large queries on low bandwidth connections

Posted: 17 May 2013 11:10 AM PDT

I've been running some measurements of bandwidth needs and the effects of latency on query performance against Postgres 8.3 on Windows (we're waiting for management to approve an upgrade): SELECTs of various numbers of rows and some INSERTs, at bandwidths of 25 Mbps, 5 Mbps, and 1 Mbps, with latencies of 50, 40, 30, 20, and 10.

For SELECTs of 1, 3, and 250 rows the performance was basically the same at each bandwidth, with the same effects of latency. However, for a SELECT of 7,100 rows, the 1 Mbps case had identical performance across every latency value. Additionally, while the 25 Mbps and 5 Mbps times scaled linearly with the difference in rows (28x the rows, 28x the time), the 1 Mbps queries took about 44x.

So I was curious whether Postgres has some built-in throttling that would mask the effects of latency. This may be more of a networking question, but I don't think so; the behavior seems specific to larger queries on slower connections.

Performance of database with transactional data

Posted: 17 May 2013 11:06 AM PDT

There are lots of row modifications (several thousand rows per minute) in our PostgreSQL database, which stores transactional data. We have a problem because PostgreSQL's vacuuming process significantly slows down our database performance. I am looking for the most efficient way to solve this. Could one of these points be a solution?

  • Separate this transactional data into another PostgreSQL database?
  • Switch from PostgreSQL to MySQL for better performance?
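
Before moving the data or switching databases, per-table autovacuum settings are the usual first lever; a hedged sketch (the table name and numbers are placeholders that illustrate the knobs, not recommendations):

    ALTER TABLE transactions SET (
        autovacuum_vacuum_scale_factor = 0.02,  -- vacuum after ~2% dead rows instead of the 20% default
        autovacuum_vacuum_cost_delay   = 10     -- throttle vacuum I/O (in milliseconds)
    );

More frequent, cheaper vacuums on the hot table often hurt far less than the occasional huge one.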

MySQL : Query WHERE clause and JOIN

Posted: 17 May 2013 01:53 PM PDT

I'm having an issue using a WHERE clause together with a JOIN.

SELECT * FROM `CallDetailRecord`
WHERE `StartTime` >= 1357102799000 AND `StartTime` <= 1357880399000
JOIN `CallEvent` ON `EventID` = `CallEventID`

Error Code: 1064
You have an error in your SQL syntax; check the manual that corresponds to your MySQL
server version for the right syntax to use near 'JOIN `CallEvent` ON `EventID` = `CallEventID`
LIMIT 0, 1000' at line 1

Execution Time : 0 sec
Transfer Time  : 0 sec
Total Time     : 0.004 sec
---------------------------------------------------

I'm just trying to limit the data set by time (StartTime), but I get a query error and it refers to the join.

Any ideas? I will be happy to provide more info if required.
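
The 1064 error is about clause order: the JOIN belongs in the FROM clause, before WHERE. A hedged rewrite (it assumes EventID lives on CallEvent and CallEventID on CallDetailRecord; qualify the columns to match your schema):

    SELECT *
    FROM `CallDetailRecord` AS cdr
    JOIN `CallEvent`        AS ce ON ce.`EventID` = cdr.`CallEventID`
    WHERE cdr.`StartTime` >= 1357102799000
      AND cdr.`StartTime` <= 1357880399000;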

MySQL Federated tables and Triggers

Posted: 17 May 2013 11:12 AM PDT

This is the scenario.

I have two MySQL servers (S1, S2) on different machines, with a database on each (DB1, DB2).

I have a table (T2) on DB2 that needs to "fetch" rows from another table (T1) on DB1.

I have created a temporary table (a base table on DB1 and a FEDERATED table on DB2), so when I insert a row into T1 which complies with some requirements, I copy it into my DB1 temporary table.

That's OK.

Through the FEDERATED table on DB2 I get the row I need for T2.

I supposed that with a trigger on the federated temporary table of DB2 it would be possible to insert that row into T2, but the trigger never fires.

Any suggestions?

Re-enable Windows Authentication in SQL Server

Posted: 17 May 2013 02:34 PM PDT

A former employee disabled Windows Authentication on our server. Now I'm not able to access SQL Server, even though I have Administrator access to the machine. I need to reset the sa password.

I tried logging in using single-user mode, as Remus described, but I get the following error:

Login failed for user 'SERVER\Administrator'.
Reason: The account is disabled.
(Microsoft SQL Server, Error: 18470)
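
Error 18470 means the Windows login exists but has been disabled, so single-user mode alone will not get past it. A hedged sketch of the statements you would run once any sysadmin principal can connect (the login names come from the error message above; adjust to your instance):

    ALTER LOGIN [SERVER\Administrator] ENABLE;   -- re-enable the disabled Windows login
    ALTER LOGIN [sa] ENABLE;
    ALTER LOGIN [sa] WITH PASSWORD = 'new-strong-password';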

TokuDB/InnoDB Question

Posted: 17 May 2013 11:22 AM PDT

Which is the faster schema for inserting data, with either the TokuDB or InnoDB engine, at over 200 inserts per second?

It's okay to answer for just one engine.

TABLE: `game`
  `gameId`     bigint 20   (PK)
  `gameMapId`  int 4
  `gameType`   varchar 20
  `createDate` datetime    (index)

TABLE: `game`
  `idx`        bigint 20   (auto-increment PK)
  `gameId`     bigint 20   (UK)
  `gameMapId`  int 4
  `gameType`   varchar 20
  `createDate` datetime    (index)

Auditing specific database events in SQL Server

Posted: 17 May 2013 12:54 PM PDT

For auditing purposes, we have a requirement to enable auditing for these events:

  • DBA logins
  • Changes made by DBAs, such as INSERT, DELETE, UPDATE, ALTER, etc.

Changes made by our application are tracked in the tables themselves (CreatedBy, ModifiedBy, CreatedOn, ModifiedOn), and row changes are stored as XML in a dedicated table, so we won't need to log changes made by our application.

Previously we had C2 audit mode temporarily enabled, but due to data volume and performance considerations, and with these requirements in mind, we considered it excessive and disabled it.

Enabling C2 audit mode is fairly easy; how can I configure the database to perform this kind of logging?

Additional Notes:

  • Currently our server runs SQL Server 2008 R2 Enterprise, but since we mostly do data storage and Reporting Services, we'll be downgrading to SQL Server 2008 R2 Standard.
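
SQL Server Audit is the feature usually pointed at for this. A hedged sketch (names and the file path are placeholders; note that in 2008 R2 this is an Enterprise-only feature, which matters for the planned downgrade to Standard):

    CREATE SERVER AUDIT DbaActivityAudit
        TO FILE (FILEPATH = N'D:\Audit\');
    ALTER SERVER AUDIT DbaActivityAudit WITH (STATE = ON);

    -- Server level: capture logins.
    CREATE SERVER AUDIT SPECIFICATION DbaLogins
        FOR SERVER AUDIT DbaActivityAudit
        ADD (SUCCESSFUL_LOGIN_GROUP)
        WITH (STATE = ON);

    -- Database level: capture DML and schema changes by the DBA group in one database.
    USE MyDatabase;
    CREATE DATABASE AUDIT SPECIFICATION DbaChanges
        FOR SERVER AUDIT DbaActivityAudit
        ADD (INSERT, UPDATE, DELETE ON DATABASE::MyDatabase BY [DOMAIN\DBAGroup]),
        ADD (SCHEMA_OBJECT_CHANGE_GROUP)
        WITH (STATE = ON);

On Standard edition the usual fallbacks are server-side traces or third-party tools, since the database-level audit above will not be available.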

Large INSERTs performance optimization

Posted: 17 May 2013 01:04 PM PDT

I have 15 Amazon AWS EC2 t1.micro instances which simultaneously populate an Amazon RDS MySQL db.m2.xlarge database with data using large INSERTs (40,000 rows per query).

The queries are sent continuously. The table is InnoDB with two INT columns, and there is an index on both columns. CPU utilization of the RDS instance is about 30% while receiving data.

When I have a single EC2 instance, the speed is orders of magnitude faster than when I run 15 instances simultaneously; the 15-instance group gets slower and slower until the speed becomes totally unsatisfactory.

How can I optimize performance for this process?

UPDATE

My SHOW CREATE TABLE results is the following:

CREATE TABLE `UserData` (
  `uid` int(11) NOT NULL,
  `data` int(11) NOT NULL,
  PRIMARY KEY (`uid`,`data`),
  KEY `uid` (`uid`),
  KEY `data` (`data`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

I need 2 indexes because it is necessary for me to fetch data by uid and by data value.

I insert data with INSERT INTO UserData (uid, data) VALUES (1,2),(1,3),(1,10),... with 40,000 (uid, data) pairs per statement.

The 15 parallel instances insert ~121,000,000 rows in 2 hours, but I am sure it could be much faster.
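
One hedged observation from the schema above: the primary key (uid, data) already covers lookups by uid, so the separate uid index mostly adds write amplification on every insert. Dropping it (after verifying nothing relies on it by name) is a cheap experiment:

    ALTER TABLE UserData DROP INDEX uid;

The data index still has to stay if you query by data value alone.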

MySQL Hanging Completely when `ALTER TABLE... ENABLE KEYS`

Posted: 17 May 2013 12:39 PM PDT

I know very little about database administration but I have to deal with some very large tables on my site.

This server has 64 GB of RAM and an Intel Core i7-3820 (4 x 3600 MHz). Most of what it does is MySQL. I use half MyISAM and half InnoDB tables.

I have a couple of MyISAM tables with billions of rows. Every day a script disables keys, adds a few million more rows, then enables keys again. The ALTER TABLE ... ENABLE KEYS causes the server to basically stop for a couple of hours: no website which uses MySQL will load at all, even though none of them are accessing the tables being altered.

Please also advise me on how to set up the my.cnf file to fix this issue and to optimize rebuilding these indexes as fast as possible. Someone told me to increase key_buffer_size, but I'm unsure whether this is right, as everyone seems to have a different opinion. Currently it looks like this:

[client]
port        = 3306
socket      = /var/lib/mysql/mysql.sock

[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-external-locking
max_allowed_packet = 512M
table_open_cache = 1024
sort_buffer_size = 128M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 24G
thread_cache_size = 12
query_cache_size = 256M
thread_concurrency = 16
log-bin=mysql-bin
binlog_format=mixed
server-id   = 1
innodb_file_per_table = 1
table_cache = 1024
key_buffer = 256M
key_buffer_size = 12G
myisam_repair_threads = 4
big-tables
bind-address = 127.0.0.1
max_connections = 400
tmp_table_size = 4G
max_heap_table_size = 4G
log_bin = /backup/mysql-bin-logs/mysql-bin.log
expire_logs_days        = 10
max_binlog_size         = 100M
innodb_buffer_pool_size = 12G
local-infile=1
net_read_timeout = 1800
net_write_timeout = 1800

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
local-infile=1

[myisamchk]
key_buffer_size = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M
key_buffer = 256M

[mysqlhotcopy]
interactive-timeout

MySQL Version

innodb_version            5.5.30
protocol_version          10
version                   5.5.30-log
version_comment           MySQL Community Server (GPL) by Remi
version_compile_machine   x86_64
version_compile_os        Linux

UPDATE

I've started a bounty. I changed some of the my.cnf settings (also updated in this post). Then, when I tried to rebuild the indexes on the large table, it started out with "Repair with 8 threads" (even though the number of repair threads is set to 4), and when I checked a few hours later the same command was on "Repair with keycache", which is where it is still sitting. So somehow it degraded from the sort method to the keycache method (I have no idea why!).

Please help me optimize this! It's supposed to run every day, but currently it takes several days just to do the ALTER TABLE ... ENABLE KEYS.

Here are some other variables I was asked for, which I do not understand but might help you help me:

+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| key_cache_block_size | 1024  |
+----------------------+-------+

+-----------------+-------------+
| Variable_name   | Value       |
+-----------------+-------------+
| key_buffer_size | 12884901888 |
+-----------------+-------------+

+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Key_blocks_unused | 0     |
+-------------------+-------+
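
A hedged pointer on the "Repair with keycache" fallback: MyISAM switches from "Repair by sorting" to the much slower keycache method when the temporary sort would exceed myisam_max_sort_file_size or the free space in tmpdir, so those are worth checking alongside the buffer sizes above:

    SHOW VARIABLES LIKE 'myisam_max_sort_file_size';
    SHOW VARIABLES LIKE 'tmpdir';
    SHOW GLOBAL STATUS LIKE 'Key_blocks_unused';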

How can I convert an Oracle dump file into SQL Server?

Posted: 17 May 2013 09:08 PM PDT

I want to ask about converting Oracle dump files (.dmp) into SQL Server backup files (.bak), where during the conversion I don't have to be connected to any database server.

I've searched for related tools, such as Oradump to SQL Server. Do you have other suggestions for solving this? I mean open source ones.


Thanks for both of your responses. I see how difficult it will be, but is there any possibility of converting the Oracle dump file another way? All of the converter tools I have found require a connection to a database server. I would be very thankful if you could suggest another tool. Thanks anyway.

MySQL users corrupt

Posted: 17 May 2013 11:08 AM PDT

I have a strange situation here:

From time to time I cannot log in with any of my MySQL users. I cannot even make a mysqldump.

So I started searching in the MySQL files and found that users.MYD and users.MYI are modified at the time the login problem occurs. The only way to get everything working again is to restore the users.* files from a time when the system was running okay.

I searched for the problem and found that there was a similar bug in MySQL, but in the older versions (4.x). I'm running 5.5.

Any ideas? Thanks!
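
A hedged first step for the next occurrence, assuming the affected table is the standard MyISAM privilege table in the mysql schema: check and repair it in place instead of restoring files, and see whether the check actually reports corruption.

    CHECK TABLE mysql.user;
    REPAIR TABLE mysql.user;
    FLUSH PRIVILEGES;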

I have multiple sources INSERTing into a MySQL innodb table. My periodic aggregation script never makes accurate aggregates. Why?

Posted: 17 May 2013 12:08 PM PDT

I apologize in advance if this is a repeat. I'm not really sure how to properly ask for what I'm running into.

I have a large InnoDB table set up. I have 3 sources that all INSERT into it concurrently at a moderately high volume (300-500 INSERTs/s).

I have a Perl script running on a cron job every 5 minutes that aggregates the data since the last time it ran. It keeps track of the last record it processed by storing the auto_increment value of the last row in a metadata table.

The aggregates are always off, but not by much. This has been so frustrating because it's just plain simple math (SELECT account_id, SUM(sold) ... GROUP BY account_id). I have a suspicion that it has something to do with the transaction isolation level (REPEATABLE READ).

I recently found FlexViews which looks very cool and might address my problem. But I was wondering if anyone could:

  • Tell me what is going on here. Why is this happening?
  • What can be done to produce 100% accurate aggregates?

I'm about 95% sure the auto_increment behavior combined with transaction commit ordering is what's hosing me up, but I don't understand enough about databases to really ask the right question.

Oh, one thing to note: I've already checked the field types. This issue isn't the result of rounding.
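
The commit-ordering suspicion is the usual culprit: auto_increment values are assigned at INSERT time, but rows only become visible at COMMIT, so when the script records "last id = N" there may still be uncommitted rows with ids below N that it will never revisit. One common workaround is to aggregate only up to a high-water mark that lags far enough behind for in-flight transactions to finish; a hedged sketch (account_id and sold come from the question, while the table, id column, metadata table, and lag are assumptions):

    SET @last_processed_id = (SELECT last_id FROM aggregation_metadata);
    SET @safe_max = (SELECT MAX(id) FROM sales) - 1000;  -- lag chosen so open transactions have committed

    SELECT account_id, SUM(sold) AS total_sold
    FROM   sales
    WHERE  id > @last_processed_id
      AND  id <= @safe_max
    GROUP  BY account_id;

    -- Persist @safe_max as the new last_id for the next run.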

MySQL ignoring some params from my.cnf

Posted: 17 May 2013 12:12 PM PDT

I have a virtual box with 2 installed instances of MySQL.

First instance:

- /etc/mysql/my.cnf
- /var/lib/mysql
- port = 3306

Second instance:

- /etc/mysql2/my.cnf
- /var/lib/mysql2
- port = 3307

I can connect to both. The first instance is fine. The second is not.

MySQL does pick up some params from this my.cnf file (socket, port, pid-file, datadir, ...). But if I try to change anything else (caches, buffers, log files), it completely ignores the changes after restarting.

I need to make MySQL read all params from my.cnf.

I tried:
- moving my.cnf from /etc/mysql to /var/lib/mysql;
- commenting out !includedir /etc/mysql/conf.d.

Could someone guide me?
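
A hedged way to see exactly which option files and values the second instance is using, so you can tell whether /etc/mysql2/my.cnf is being read at all (assumes the second mysqld can be started with an explicit defaults file):

    # Which option files does mysqld search, and in what order?
    mysqld --verbose --help 2>/dev/null | grep -A1 "Default options"

    # Which options would the second instance actually start with?
    mysqld --defaults-file=/etc/mysql2/my.cnf --print-defaults

If the second instance is not launched with --defaults-file (or --defaults-extra-file), it will read the standard option files, which belong to the first instance, for anything you did not override on the command line.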

PostgreSQL is running locally but I cannot connect. Why?

Posted: 17 May 2013 01:49 PM PDT

I recently updated my machine from Mac OS X Lion (10.7.4) to Mountain Lion (10.8), and I think it borked my PostgreSQL installation. It was originally installed via Homebrew. I'm not a DBA, but I'm hoping someone can tell me how to troubleshoot this.

I am unable to connect (but was able to before pre-Mountain Lion):

$ psql -U rails -d myapp_development
psql: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

But Postgres is still clearly running:

$ ps aux | grep postgres
meltemi  2010  0.0  0.0  2444124  5292  ??  Ss  Wed01PM  0:00.02 postgres: rails myapp_development [local] idle
meltemi   562  0.0  0.0  2439312   592  ??  Ss  Wed12PM  0:02.28 postgres: stats collector process
meltemi   561  0.0  0.0  2443228  1832  ??  Ss  Wed12PM  0:01.57 postgres: autovacuum launcher process
meltemi   560  0.0  0.0  2443096   596  ??  Ss  Wed12PM  0:02.89 postgres: wal writer process
meltemi   559  0.0  0.0  2443096  1072  ??  Ss  Wed12PM  0:04.01 postgres: writer process
meltemi   466  0.0  0.0  2443096  3728  ??  S   Wed12PM  0:00.85 /usr/local/bin/postgres -D /usr/local/var/postgres -r /usr/local/var/postgres/server.log

And it's responding to queries (both to a test db and the development db) from a local Rails app

  User Load (0.2ms)  SELECT "users".* FROM "users"
  Rendered users/index.html.haml within layouts/application (1.3ms)

There appears to be no /var/pgsql_socket/ directory, let alone the /var/pgsql_socket/.s.PGSQL.5432 socket file mentioned above!?! Maybe the install of Mountain Lion wiped that out?

$ ls -l /var/ | grep pg
drwxr-x---  2 _postgres  _postgres  68 Jun 20 16:39 pgsql_socket_alt

How can I troubleshoot this?
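
A hedged way to find out where the running server is actually listening, since the Rails app clearly reaches it: connect over TCP and ask the server for its socket directory, then point psql at that directory. The setting name below assumes PostgreSQL 9.2 or earlier (it became unix_socket_directories in 9.3); Homebrew builds commonly default to /tmp.

    $ psql -h localhost -U rails -d myapp_development -c 'SHOW unix_socket_directory;'
    $ psql -h /tmp -U rails -d myapp_development

If the socket really is in /tmp, the error just means this psql binary (probably Apple's, compiled with /var/pgsql_socket as its default) is looking in the wrong place, and exporting PGHOST=/tmp or using the Homebrew psql would fix it.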

Why are numbers tables "invaluable"?

Posted: 17 May 2013 12:53 PM PDT

Our resident database expert is telling us that numbers tables are invaluable. I don't quite understand why. Here's a numbers table:

USE Model
GO

CREATE TABLE Numbers
(
    Number INT NOT NULL,
    CONSTRAINT PK_Numbers
        PRIMARY KEY CLUSTERED (Number)
        WITH FILLFACTOR = 100
)

INSERT INTO Numbers
SELECT
    (a.Number * 256) + b.Number AS Number
FROM
    (
        SELECT number
        FROM master..spt_values
        WHERE
            type = 'P'
            AND number <= 255
    ) a (Number),
    (
        SELECT number
        FROM master..spt_values
        WHERE
            type = 'P'
            AND number <= 255
    ) b (Number)
GO

Per the blog post, the rationale given is

Numbers tables are truly invaluable. I use them all of the time for string manipulation, simulating window functions, populating test tables with lots of data, eliminating cursor logic, and many other tasks that would be incredibly difficult without them.

But I don't understand what those uses are, exactly. Can you provide some compelling, specific examples of where a "numbers table" saves a ton of work in SQL Server, and why we should have one?
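
As one illustration of the kind of use the quote means (a hedged sketch; the Orders table and its columns are hypothetical): joining the Numbers table to a date range produces a row for every day, including days with no data, which is awkward to do from the data table alone.

    DECLARE @start date = '2013-01-01';

    SELECT DATEADD(DAY, n.Number, @start) AS OrderDay,
           COUNT(o.OrderID)               AS OrdersPlaced
    FROM Numbers AS n
    LEFT JOIN Orders AS o
           ON o.OrderDate = DATEADD(DAY, n.Number, @start)
    WHERE n.Number < 31                      -- one row per day of January 2013
    GROUP BY DATEADD(DAY, n.Number, @start)
    ORDER BY OrderDay;

The same pattern, with the Numbers table standing in for "one row per position", drives string splitting and gap-finding queries without cursors.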
