Sunday, June 30, 2013

[how to] Clustered tables vs non-clustered (IOT vs heap)

Clustered tables vs non-clustered (IOT vs heap)

Posted: 30 Jun 2013 02:40 PM PDT

I happened to work with both SQL Server and Oracle for quite a while (thankfully not at the same time).

What still puzzles me is the approach to storing tables as balanced trees. In Oracle-like RDBMSs the heap is the default; in SQL Server (and many others) the reverse (clustered, IOT) is true. Proponents of each approach claim their way is the only 'correct' one and support their chosen point of view with a bunch of tests/presentations. However, in my opinion, the only point they have proven is that the implementation of the "non-default" approach is poor, and it shouldn't be used in most cases...

I'm pretty sure both approaches are good enough (just because they still exist on the market and show comparable performance) and have some math underneath, but I've failed to find any good references.

I realize the topic may be too broad to answer, and good links are very welcome, but I really want to know why two seemingly contradictory approaches have both proven valid.
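To make the two defaults concrete, here is a minimal sketch (table and column names are made up) of how each product spells the two organizations:

-- Oracle: heap is the default; an index-organized table (IOT)
-- must be requested explicitly.
CREATE TABLE t_heap (id NUMBER PRIMARY KEY, payload VARCHAR2(100));
CREATE TABLE t_iot  (id NUMBER PRIMARY KEY, payload VARCHAR2(100))
  ORGANIZATION INDEX;

-- SQL Server: a PRIMARY KEY is clustered by default; a heap
-- must be requested explicitly.
CREATE TABLE t_clustered (id int PRIMARY KEY, payload varchar(100));
CREATE TABLE t_heap2     (id int PRIMARY KEY NONCLUSTERED, payload varchar(100));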

Thank you

How/Where to get date/time/calendar data to store in a table?

Posted: 30 Jun 2013 08:18 PM PDT

I cannot recall where I got the data before so I'm asking here:

Where can I get or how can I generate standard date/time/calendar data?

The table would consist of columns such as:

main_date -- generated date, mm/dd/yyyy
date_id   -- yyyymmdd
day       -- Monday, Tuesday, ...
time      -- hh:ss
time_id   -- 120000 (or some int equivalent for each minute/second?)

and a few more columns, maybe like the year, month, date (dd), etc., which I cannot remember at the moment. Then there are, if I recall correctly, columns that cover timezone changes.

I'm not sure if the data was created by a script written by another DBA at my previous job, or if it is available on the internet. I can't seem to find it, but I'm inclined to believe it is out there somewhere. It could also have been a table already created and ready for download (a script?). The data I'm referring to doesn't change: 2013-06-30, for example, will forever be a Sunday, and 12:00 PM will always be 12:00 PM. There are 24 hours in each day; that's also constant. I apologize if this is vague.

I'm really looking for a table that can be looked up, since the values per day per year will never change.
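In case it helps, a calendar dimension like this is usually generated rather than downloaded. A minimal sketch in T-SQL (the dialect, table name, and derived columns are assumptions based on the description above):

;WITH dates AS (
    SELECT CAST('2000-01-01' AS date) AS d
    UNION ALL
    SELECT DATEADD(DAY, 1, d) FROM dates WHERE d < '2049-12-31'
)
SELECT d                        AS main_date,
       CONVERT(char(8), d, 112) AS date_id,   -- yyyymmdd
       DATENAME(WEEKDAY, d)     AS [day],     -- Monday, Tuesday, ...
       YEAR(d)                  AS [year],
       MONTH(d)                 AS [month],
       DAY(d)                   AS [dd]
INTO   Calendar
FROM   dates
OPTION (MAXRECURSION 0);  -- the range exceeds the default recursion limit of 100

A time-of-day dimension can be built the same way, with one row per minute or second instead of one row per day.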

Questions regarding second normal form

Posted: 30 Jun 2013 10:16 AM PDT

I understand that in order to fulfill 2NF, attributes must not be dependent on part of the key. Now, say we have a relation R with the set of attributes {A,B,C,D,E,F,G,H,I,J,K}, the functional dependencies {A→GH, B→IJ, C→A, F→B, FC→DK, K→E}, and the candidate keys C and F. Do FDs like C→A violate 2NF because A is not fully functionally dependent on BOTH F and C?

So the question is

Do attributes in functional dependencies need to depend on the whole set of candidate key attributes, in this case FC? Or is a dependency on one of the keys sufficient to fulfill 2NF?

What is the best way to recover from a mysql replication fail?

Posted: 30 Jun 2013 05:25 PM PDT

Today, the replication between our master MySQL DB server and the two replication servers dropped. I have a procedure here which was written a long time ago, and I'm not sure it's the fastest method to recover from this issue. I'd like to share the procedure with you; I'd appreciate your thoughts about it, and maybe you can even tell me how it can be done quicker.

At the master:

RESET MASTER;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

Copy the values from the result of the last command somewhere.

Without closing the connection to the client (because it would release the read lock), issue the command to get a dump of the master:

mysqldump mysq

Now you can release the lock, even if the dump hasn't ended. To do it, run the following command in the mysql client:

UNLOCK TABLES;

Now copy the dump file to the slave using scp or your preferred tool.

At the slave:

Open a connection to mysql and type:

STOP SLAVE;

Load the master's data dump with this console command:

mysql -uroot -p < mysqldump.sql

Sync the slave and master logs:

RESET SLAVE;
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;

where the values of the above fields are the ones you copied before.

Finally, type:

START SLAVE;

To check that everything is working again, type

SHOW SLAVE STATUS;

and you should see:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

That's it!

At the moment I'm at the stage of copying the DB from the master to the other two replication servers, and it has taken more than 6 hours to get this far. Isn't that too slow? The servers are connected through a 1 Gb switch.

Extract MySQL information from a compressed tar.gz file

Posted: 30 Jun 2013 07:58 AM PDT

I have a backup of a MySQL database which is contained in a .tar.gz file, and I need to restore it.

I've never had to do this before. The only way I can access the database is through phpMyAdmin. There is an 'import' feature in phpMyAdmin, but it doesn't accept files in the tar.gz format, only .sql.

Is there any way I can convert from .tar.gz to .sql? I know I can upload from an .sql file without a problem.

I don't have access to any 'back-end' stuff on the web server, so I'm limited to what I can do locally on my PC.

Any help is much appreciated.

MySQL always crash on start, ibdata1 bigger than 4GB

Posted: 30 Jun 2013 02:13 AM PDT

I'm having a problem with MySQL when testing my production data. Whenever I start MySQL (mysqld), it crashes. My suspicion is my InnoDB data file (ibdata1), since it is already bigger than 4 GB. I'm using Windows XP 32-bit with 3 GB of memory.

How can I start my server normally? I already searched the Internet and found out that you can actually use more than one ibdata file by changing the config file (my.ini), but when I change the config file I get "Error 1067: The process terminated unexpectedly".

How can I fix this problem? Thank you.

EDIT

Error log:

130630 13:33:53 [Note] Plugin 'FEDERATED' is disabled.
130630 13:33:53 InnoDB: The InnoDB memory heap is disabled
130630 13:33:53 InnoDB: Mutexes and rw_locks use Windows interlocked functions
130630 13:33:53 InnoDB: Compressed tables use zlib 1.2.3
130630 13:33:53 InnoDB: Initializing buffer pool, size = 47.0M
130630 13:33:53 InnoDB: Completed initialization of buffer pool
130630 13:33:54 InnoDB: highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 30597818328
130630 13:33:54  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 30601590452
130630 13:33:56  InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
130630 13:33:58  InnoDB: Waiting for the background threads to start
130630 13:33:59 InnoDB: 1.1.8 started; log sequence number 30601590452
130630 13:33:59 [Note] Event Scheduler: Loaded 0 events
130630 13:33:59 [Note] C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld: ready for connections.
Version: '5.5.17'  socket: ''  port: 3306  MySQL Community Server (GPL)
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 134995.
InnoDB: You may have to recover from a backup.
130630 13:34:00  InnoDB: Page dump in ascii and hex (16384 bytes):
 len 16384; hex **deleted, too long**
InnoDB: End of page dump
130630 13:34:01  InnoDB: Page checksum 580897364, prior-to-4.0.14-form checksum 4142776698
InnoDB: stored checksum 1320496436, prior-to-4.0.14-form stored checksum 4142776698
InnoDB: Page lsn 7 512426390, low 4 bytes of lsn at page end 512426390
InnoDB: Page number (if stored to page already) 134995,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
InnoDB: Page may be an index page where index id is 3938
InnoDB: (index "xxxx" of table "xxxx"."xxxx")
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 134995.
InnoDB: You may have to recover from a backup.
InnoDB: It is also possible that your operating
InnoDB: system has corrupted its own file cache
InnoDB: and rebooting your computer removes the
InnoDB: error.
InnoDB: If the corrupt page is an index page
InnoDB: you can also try to fix the corruption
InnoDB: by dumping, dropping, and reimporting
InnoDB: the corrupt table. You can use CHECK
InnoDB: TABLE to scan your table for corruption.
InnoDB: See also http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
InnoDB: Ending processing because of a corrupt database page.
130630 13:34:01  InnoDB: Assertion failure in thread 3212 in file buf0buf.c line 3660
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
130630 13:34:01 - mysqld got exception 0xc0000005 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
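The log itself points to the forced-recovery procedure. As a sketch, the relevant my.ini change looks like this (the level is an assumption; the manual advises starting at 1 and increasing only as far as needed, since higher levels are increasingly destructive, and the usual goal is just to get the server up long enough to mysqldump the data out):

[mysqld]
# Start in a limited, safer mode so the data can be dumped out;
# raise 1 -> 6 only if the server still will not start.
innodb_force_recovery = 1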

Huge 0.000001 and 0.000002 files and so on, in mysql data folder

Posted: 30 Jun 2013 12:23 AM PDT

In the MySQL data folder, unknown files were created, as in this screenshot: [screenshot of the data folder]

This is my.ini content:

[client]
host= .
port= 3306
socket= "MySQL"

[mysql]
no-auto-rehash

[mysqld]
max_allowed_packet = 800M
thread_concurrency = 8
skip-external-locking
port=3306
basedir="c:/zpanel/bin/mysql/"
datadir="e:/hafez/bin/mysql/mysql5.5.24/data/"
character-set-server=utf8
default-storage-engine=MYISAM
max_connections=200
query_cache_size = 128M
table_cache=256
tmp_table_size=256M
thread_cache_size=8
myisam_max_sort_file_size=32G
myisam_sort_buffer_size=205M
key_buffer_size = 384M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
sort_buffer_size = 2M
innodb_fast_shutdown=0
innodb_additional_mem_pool_size = 16M
innodb_log_buffer_size = 8M
innodb_buffer_pool_size=1800MB
innodb_log_file_size = 256M
innodb_thread_concurrency = 16
innodb_lock_wait_timeout = 120
innodb_file_per_table
innodb_doublewrite = 0
log-bin = 0
innodb_support_xa = 0
innodb_flush_log_at_trx_commit = 0
tmpdir= "e:/hafez/bin/mysql/mysql5.5.24/tmp/"
enable-named-pipe
skip-federated
server-id = 1
default-time-zone   = "SYSTEM"
log_error           = "c:/zpanel/logs/mysql/mysql.err"
pid_file            = "mysql.pid"
general_log         = 0
general_log_file    = "c:/zpanel/logs/mysql/mysql.log"
slow_query_log      = 0
slow_query_log_file = "c:/zpanel/logs/mysql/mysql-slow.log"

[myisamchk]
key_buffer_size = 1024M
sort_buffer_size = 1024M
read_buffer = 8M
write_buffer = 8M

[mysqldump]
quick
max_allowed_packet = 16M

[mysqlhotcopy]
interactive-timeout

I recently added some InnoDB tables, and these files appeared!

These files are in the root MySQL data folder (e:/hafez/bin/mysql/mysql5.5.24/data) and aren't in any database folder.

What are these files?

Is it safe to delete them?
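One thing worth checking before deleting anything: the my.ini above contains log-bin = 0, and giving log-bin the value "0" would make "0" the binary log basename, producing exactly files named 0.000001, 0.000002, and so on. If that is what they are, they can be listed and removed through the server rather than deleted from disk (a sketch; the file name in the PURGE statement is hypothetical):

SHOW BINARY LOGS;                  -- lists the files if they are binlogs
PURGE BINARY LOGS TO '0.000010';   -- removes everything before this file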

Can't connect to my Postgres databases

Posted: 30 Jun 2013 09:35 AM PDT

*** SECOND UPDATE ***


I gave up and completely reset Postgres, as suggested in this thread. Even after this reset I was not able to use the "main" cluster; I had to remove and reinstall it. Now it works, but I don't understand what happened.

I think that @Pjack was on the right track; the problem was probably that Postgres thought the clusters were already running, or that the ports were already in use. If the problem comes up again I'll investigate more deeply.


*** UPDATE ***


I also tried to remove and reinstall Postgres and PostGIS, but it doesn't work. This is what I've done:

sudo apt-get remove postgis libarmadillo3 libdap11 libdapclient3 libepsilon0 libfreexl1 libgdal1 libgeos-3.3.8 libgeos-c1 libgfortran3 libhdf4-0-alt libhdf5-7 liblapack3 libnetcdfc7 libodbc1 libpq5 libproj0 libspatialite5 libxerces-c28 odbcinst odbcinst1debian2 postgresql-9.1 postgresql-9.1-postgis postgresql-client-9.1 postgresql-client-common postgresql-common proj-bin proj-data  

A strange message showed up during the uninstall:

Removing postgresql-common ...
 * No PostgreSQL clusters exist; see "man pg_createcluster"

Of course I had created a cluster when I installed Postgres. I tried to create a new cluster, 'test', and it works, but obviously I can't access any database of the other clusters.



I can't connect anymore to my Postgres databases. Trying to access it gives me the usual error:

psql: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5433"?

Starting the postgres service gives me no error, and there's no error in /var/log/postgresql/, but checking the status gives me:

9.1/main (port 5432): down
9.1/postgres (port 5433): down

while usually I have port 5433 up, and netstat -l | grep postg gives me no result.

Between the last time it worked and now, I tried to load OpenStreetMap data into a PostGIS DB. The only "dangerous" thing I could have done was changing these Postgres settings:

shared_buffers = 944MB
checkpoint_segments = 20
maintenance_work_mem = 256MB
effective_cache_size = 1889MB
checkpoint_completion_target = 0.9

and adding in /etc/sysctl.conf

kernel.shmmax=1980760064
kernel.shmall=483584

as suggested in the Postgres documentation for tuning database performance. Anyway, I reverted all the changes, using the default values or commenting out the added lines.

The second thing I did was run osm2pgsql to load the data, using:

sudo stop lightdm
osm2pgsql -r pbf -C 3200 --slim -G --tablespace-main-data main-data --tablespace-main-index main-index --tablespace-slim-data slim-data --tablespace-slim-index slim-index toscana.pbf

I don't know if osm2pgsql did something dangerous, but I don't think so... What could have happened?

I'm using Postgres 9.1 + PostGIS 2 on Lubuntu 13.04 32-bit locally, using peer authentication and a Unix socket.

Need to install Oracle Express 11g Release 2 on a Windows 7 64-bit laptop

Posted: 30 Jun 2013 05:36 PM PDT

I need the Oracle 11g Release 2 sample schemas (HR, OE, etc.) in order to do most of the available online tutorials. I was hoping to install Oracle Express Edition on my Windows 7 laptop to get them, but I have never heard of anybody successfully installing Oracle XE on a 64-bit Windows platform.

Is there a version of Oracle XE 11g R2 available for Windows 7? And if so, could you please point me to it?

Thanks...

How to find parent rows that have identical sets of child rows?

Posted: 30 Jun 2013 04:25 AM PDT

Suppose I have structure like this:

Recipes Table:
    RecipeID
    Name
    Description

RecipeIngredients Table:
    RecipeID
    IngredientID
    Quantity
    UOM

What are some good ways for finding duplicate recipes? A duplicate recipe is defined as having the exact same set of ingredients and quantities for each ingredient.

I've thought of using FOR XML PATH to combine the ingredients into a single column. I haven't fully explored this, but it should work as long as I make sure the ingredients/UOMs/quantities are sorted in the same sequence and use a proper separator. Are there better approaches?
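The FOR XML PATH idea sketched above might look like this (a sketch only; the CAST types and separators are assumptions, NULL-free UOM values are assumed, and ORDER BY IngredientID provides the consistent sequence mentioned):

WITH sig AS (
    SELECT r.RecipeID,
           (SELECT CAST(ri.IngredientID AS varchar(20)) + ':' +
                   CAST(ri.Quantity AS varchar(30)) + ':' + ri.UOM + ';'
            FROM   RecipeIngredients ri
            WHERE  ri.RecipeID = r.RecipeID
            ORDER  BY ri.IngredientID
            FOR XML PATH('')) AS ingredient_sig
    FROM Recipes r
)
SELECT ingredient_sig, COUNT(*) AS recipe_count
FROM   sig
GROUP  BY ingredient_sig
HAVING COUNT(*) > 1;   -- each group is one set of duplicate recipes

Joining sig back to itself on ingredient_sig would then pair up the actual RecipeIDs.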

Edit: There are 48K recipes and 200K ingredient rows.

MySQL bin log missing data?

Posted: 30 Jun 2013 08:37 PM PDT

I'm trying to make heads and tails of my binary logs and am coming up short. I have many entries similar to the following from mysqlbinlog, but I'm missing log data that I know should be there.

# at 1050548
#130617 18:40:03 server id 1  end_log_pos 1050619   Query   thread_id=844988    exec_time=0 error_code=0
SET TIMESTAMP=1371519603/*!*/;
BEGIN
/*!*/;
# at 1050619
# at 1050782
#130617 18:40:03 server id 1  end_log_pos 1050809   Xid = 33121028
COMMIT/*!*/;

It's puzzling because I see the expected SQL in the mysqlbinlog output for statements executed in phpMyAdmin, but statements coming from other PHP-based remote web servers appear not to be recorded.

My binary logging settings are:

bin_log = /path_to_sql_incrementals
binlog_ignore_db = mysql

Am I missing a logging option?

MySQL 5.0.95 / CentOS 5.9
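Two quick sanity checks, for what they're worth (a sketch): note that the option is normally spelled log_bin, and that with statement-based logging, which is all 5.0 has, binlog_ignore_db filters on the connection's default database (the one named in USE), not on the database a statement actually writes to, which is a common source of "missing" statements from web applications:

SHOW VARIABLES LIKE 'log_bin';   -- confirm binary logging is actually enabled
SHOW MASTER STATUS;              -- the Binlog_Ignore_DB column lists active filters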

Help, my database isn't performing fast enough! 100M merge with 6M needs < 1 hour!

Posted: 30 Jun 2013 05:45 PM PDT

I have a server that is currently receiving more raw data files in 1 hour than I can upsert (insert -> merge) in an hour.

I have a table with 100M rows (rounded up). The table is currently MyISAM and has 1000 columns, mostly boolean and a few varchar.

The fastest way I've found so far to get the information into my DB is:

1. Process raw data into CSV files.
2. LOAD DATA INFILE into the rawData table.
3. Insert the rawData table into Table1 (on dupe key, run my function).
4. Truncate rawData.
5. Repeat.

This worked fine until I started merging 6M+ rows into 100M rows and expecting it to take under an hour.

I have 16G of RAM, so I set my Key_Buffer_Pool to 6G, my query cache pool to 16M, and my query cache limit to 10M. I would just replace the information; however, it has to be an upsert: update the fields that are true if the row exists, insert if it does not.

Things I'm looking into at the moment: possibly switching the table to InnoDB? I'm not sure about the performance, as the insert into an empty table is fine; it's the merge that's slow.

Maybe allowing more table cache? Or even query cache?

Merge Code:

b.3_InMarket = (b.3_InMarket OR r.3_InMarket),

to merge my two boolean columns.
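For what it's worth, the whole upsert step can be expressed in one statement per batch; a sketch in MySQL (the table and column names are taken from the description above, and the id key column is an assumption):

INSERT INTO Table1 (id, `3_InMarket`)
SELECT id, `3_InMarket`
FROM   rawData
ON DUPLICATE KEY UPDATE
    -- existing value OR incoming value, per the merge rule above
    `3_InMarket` = (`3_InMarket` OR VALUES(`3_InMarket`));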

Update

  • OK, I set up RAID 0.
  • Changed my query to lock the tables for writing when inserting.
  • When importing the CSV, I'm disabling keys, then re-enabling them before the upsert.
  • Changed concurrent_insert to 2.

USER_ID field in alert logs (also in V$DIAG_ALERT_EXT view)

Posted: 30 Jun 2013 11:45 AM PDT

Does anyone know what triggers the USER_ID field in log.xml to be populated? The value also exists in the V$DIAG_ALERT_EXT view.

I've found by observing the logs that if a temp tablespace fills up, it will log the USER_ID of the problematic SQL statement causing the issue. But other than that, it appears the value is always NULL.
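For anyone probing this, a simple query against the view surfaces whichever entries do carry a user ID (a sketch; column names as exposed by V$DIAG_ALERT_EXT):

SELECT originating_timestamp, user_id, message_text
FROM   v$diag_alert_ext
WHERE  user_id IS NOT NULL
ORDER  BY originating_timestamp DESC;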

How would I store the result of a select statement so that I can reuse the results to join to different tables?

Posted: 30 Jun 2013 03:45 AM PDT

How would I store the result of a select statement so that I can reuse the results to join to different tables? This will also be inside a cursor.

Below is some pseudo code. In this example I have kept the SELECT statement simple, but in real life it is a long query with multiple joins. I have to use the identical SQL twice to join to two different tables, and as it is quite long and can change in the future, I want to be able to reuse it.

I have tried creating a view and storing the results of the select statement in it, but it seems I can't create a view inside the cursor loop; when I tried, I got an "Encountered the symbol "CREATE"" error.

DECLARE
  TYPE cur_type IS REF CURSOR;
  CURSOR PT_Cursor IS
    SELECT * FROM Table1;
  PT_Cursor_Row PT_Cursor%ROWTYPE;
BEGIN
  OPEN PT_Cursor;
  LOOP
    FETCH PT_Cursor INTO PT_Cursor_Row;
    EXIT WHEN PT_Cursor%NOTFOUND;

    SELECT ID FROM Table2 -- this is actually a long complex query
      INNER JOIN Table3 ON Table2.ID = Table3.ID
    WHERE Table2.ID = PT_Cursor_Row.ID;

    SELECT * FROM Table2 -- this is actually a long complex query
      LEFT JOIN Table4 ON Table2.ID = Table4.ID
    WHERE Table2.ID = PT_Cursor_Row.ID;

  END LOOP;
  CLOSE PT_Cursor;
END;
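As a side note, the CREATE error occurs because DDL isn't allowed directly inside a PL/SQL block. One common workaround (a sketch, with a hypothetical table name; not necessarily the best fit here) is a global temporary table created once, outside the block, then filled once per loop iteration and joined to twice:

-- Created once, as a schema object, not inside the PL/SQL block:
CREATE GLOBAL TEMPORARY TABLE complex_result (id NUMBER)
  ON COMMIT PRESERVE ROWS;

-- Inside the loop: materialize the long query once, then reuse it.
DELETE FROM complex_result;
INSERT INTO complex_result
  SELECT Table2.ID
  FROM   Table2
         INNER JOIN Table3 ON Table2.ID = Table3.ID
  WHERE  Table2.ID = PT_Cursor_Row.ID;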

How long does populating the catalog take in SQL Server 2008 R2 when a row is updated?

Posted: 30 Jun 2013 04:45 AM PDT

I have installed SQL Server 2008 R2 with the Full-Text Search feature.

I created a catalog and a full-text index on a table (let's say Table1), using the following commands (SQL queries):

USE MyDatabase
EXEC sp_fulltext_database 'enable'

CREATE FULLTEXT CATALOG [MY_CATALOG_] WITH ACCENT_SENSITIVITY = ON

CREATE FULLTEXT INDEX ON Table1 (Column1)
    KEY INDEX PK_KEY_TABLE1_
    ON MY_CATALOG_ WITH CHANGE_TRACKING AUTO;

WITH CHANGE_TRACKING AUTO means auto-populate, right?

When I use a stored procedure to update a row in Table1 and then run a search query like:

SELECT * FROM Table1 WHERE CONTAINS(Column1,'modifiedName')  

no results are found.

I've checked, and the populate status is 0 (idle). I have to go into SQL Server Management Studio and choose to rebuild the catalog, and then it works (meaning I find the results I expected).

My question: how long does populating the catalog take in SQL Server? My table (Table1) has 2000 rows.
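With CHANGE_TRACKING AUTO, population is automatic but asynchronous, so there can be a short delay after an update before CONTAINS sees it; on a 2000-row table it should normally be seconds at most. The catalog's state can also be checked from T-SQL (FULLTEXTCATALOGPROPERTY is the documented function; the catalog name is taken from the script above):

SELECT FULLTEXTCATALOGPROPERTY('MY_CATALOG_', 'PopulateStatus') AS populate_status, -- 0 = idle
       FULLTEXTCATALOGPROPERTY('MY_CATALOG_', 'ItemCount')      AS item_count;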

Error installing SQL Server 2008 - all components installed successfully except the database engine

Posted: 30 Jun 2013 05:45 AM PDT

Below is the error I copied. Please, someone help me; I have tried at least 10 times in the last 3 days. It is SQL Server 2008 R2, and I wanted to install it on Windows 7.

Microsoft SQL Server 2008 Setup

The following error has occurred:

Wait on the Database Engine recovery handle failed. Check the SQL Server error log for potential causes.

For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.0.1600.22&EvtType=0xE53883A0%25400xBE03358B%25401306%254024

Connection pools being reset with Error: 18056, Severity: 20, State: 46. & Perfmon Counters not showing

Posted: 30 Jun 2013 06:45 PM PDT

We are using SQL authentication and .NET 4.0 connection strings to connect to an Enterprise Edition 2012 SP1 SQL Server on a Windows 2008 R2 Enterprise server. We use about 50 servers split into 8 different groups serving different parts of a website.

Our website uses this SQL Server to log visit-tracking data, and over the last few days it has spat out the following messages about resetting connection pools.

The client was unable to reuse a session with SPID 1327, which had been reset for connection pooling. The failure ID is 46. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.

The error log reads:

Error: 18056, Severity: 20, State: 46.

The client was unable to reuse a session with SPID 959, which had been reset for connection pooling. The failure ID is 46. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.

Login failed for user 'xxxx'. Reason: Failed to open the database 'xxxxxxxx' configured in the login object while revalidating the login on the connection. [CLIENT: 10.xx.xx.xxx]

After some digging I found this document from the CSS blog:

http://blogs.msdn.com/b/psssql/archive/2010/08/03/how-it-works-error-18056-the-client-was-unable-to-reuse-a-session-with-spid-which-had-been-reset-for-connection-pooling.aspx

and this one by the scholar Aaron Bertrand (I know the error number is different, but the failure ID is the same and a number of the messages are identical):

http://sqlblog.com/blogs/aaron_bertrand/archive/2011/01/14/sql-server-v-next-denali-additional-states-for-error-18456.aspx

Failure ID 46 suggests that the login did not have permissions. Our logins default to the master database, and the DB name is specified in the connection string.

I wanted to check the number of connection pools, etc., so I checked all the counters in Perfmon for ".NET Data Provider for SqlServer". It only gave me the option of defaultdomain9675 for the instance, so I selected that, assuming it is a system-generated ID name for our datacentre network. Unfortunately, all of the counters read zero. On one of our other main servers the connection pools hover around 10, which is what I expected to see on a healthy server with that kind of load.

My question is threefold:

1- Can anyone suggest why the Windows 2008 R2 server is not showing ".NET Data Provider for SqlServer" counters?

2- Has anyone experienced this? I obviously believe that the login not having permissions is a red herring.

3- If different groups of web servers have the same connection string syntax but with slightly different whitespace, would this cause the server to use another connection pool?

Bitmask Flags with Lookup Tables Clarification

Posted: 30 Jun 2013 07:45 PM PDT

I've received a dataset from an outside source which contains several bitmask fields as varchars. They range in length from 3 to 21 bits. I need to be able to run SELECT queries against these fields using AND or OR logic.

Using a calculated field where I convert the bits into an integer value, I can easily find rows that match an AND query with a simple WHERE rowvalue = requestvalue, but OR logic requires using bitwise & to find matching records.

Given that I need to work with several of these columns and select from hundreds of millions of records, I worry there would be a huge performance hit when doing bitwise & operations to filter my SELECT results.

I came across this answer from searching and it looked like it may fit my needs, but I need some clarification on how it is implemented.

Is this as simple as creating a lookup table that contains all possible search conditions?

Example for 3 bits, using (a & b) (Edit: wrong bitwise op):

001,001
001,011
001,101
001,111
010,010
010,011
010,110
011,011
011,111
etc

The author mentions that it's counter-intuitive initially, but I can't help feeling I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows.

Any clarifications on the answer I linked above or other suggestions that would preserve the existing database are appreciated.

Edit: A more concrete example using small data.

Four flags: HasHouse, HasCar, HasCat, HasDog; 0000 is has none, 1111 is has all.

Any number of flags, from all to none, can be set, and results must be filtered to rows where the selection matches all flags (using exact value comparison) or at least one (using bitwise &).

Adding a single calculated column for each bitmask is OK, but adding a column for each bit, for more than 100 bits, coupled with how to insert/update the data, is why I'm looking for alternative solutions.
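For reference, the two matching modes on a single integer calculated column look like this (a sketch using the four-flag example above; the table name and flag weights are assumptions):

-- HasHouse = 8, HasCar = 4, HasCat = 2, HasDog = 1 (assumed weights)
DECLARE @mask int = 8 | 2;   -- searching on HasHouse and HasCat

SELECT * FROM People WHERE flags & @mask = @mask;  -- all selected flags set
SELECT * FROM People WHERE flags & @mask > 0;      -- at least one flag set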

SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?

Posted: 30 Jun 2013 01:45 PM PDT

I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. I'm trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them back to the calling job.

BIT columns all "1" after a phpMyAdmin export/import

Posted: 30 Jun 2013 03:45 PM PDT

I have to import data from a MySQL database using phpMyAdmin because that's the only interface my hosting provider supports.

I exported my database from my local machine using phpMyAdmin, and after that imported the script file on my host. All of the data in columns of type BIT was changed to '1'. The hosting database version is 5.5.29.

Is it a phpMyAdmin problem or a MySQL version problem? How can I fix this?

SQLite writing a query where you select only rows nearest to the hour

Posted: 30 Jun 2013 04:45 PM PDT

I've got a set of data where readings have been taken approximately every minute for about three months, with the time stored as a Unix timestamp. There is no regularity to the timestamps (i.e. the zero minute of the hour may not contain a reading; one measurement could be at 00:59:55 and the next at 01:01:01), and days may be missing.

What I need is the row nearest to each hour, with the timestamp rounded to the hour, as long as the nearest value is not more than 30 minutes away from the hour.

Where a matching hour cannot be found, it would be helpful if the query could include the time but no value.

I realise I'm asking a lot, but this would be incredibly helpful. Thanks for taking the time to read this. James

BTW, the table is just PK (autoincrement), timestamp, value, sensor id (FK). I've tried this to get the data out:

SELECT strftime('%S', time, 'unixepoch'),
       strftime('%M', time, 'unixepoch'),
       strftime('%H', time, 'unixepoch'),
       strftime('%d', time, 'unixepoch'),
       strftime('%m', time, 'unixepoch'),
       strftime('%Y', time, 'unixepoch'),
       value
FROM Timestream WHERE idSensor = 359;
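One way to express the requirement without per-field formatting is to round each timestamp to its nearest hour (which by construction is never more than 30 minutes away) and keep, per hour, the row with the smallest distance. A sketch, assuming time is an integer Unix timestamp:

SELECT s.hour_ts, t.value
FROM (
    SELECT (time + 1800) / 3600 * 3600 AS hour_ts,             -- nearest hour
           MIN(ABS(time - (time + 1800) / 3600 * 3600)) AS dist
    FROM   Timestream
    WHERE  idSensor = 359
    GROUP  BY hour_ts
) s
JOIN Timestream t
  ON  t.idSensor = 359
  AND (t.time + 1800) / 3600 * 3600 = s.hour_ts
  AND ABS(t.time - s.hour_ts) = s.dist;   -- ties return more than one row

Hours with no reading at all simply won't appear; producing a row with a time but no value would need a generated list of hours (e.g. via a recursive CTE) LEFT JOINed against this.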

Breaking Semisynchronous Replication in MySQL 5.5

Posted: 30 Jun 2013 08:45 PM PDT

I've set up Semisynchronous Replication between two MySQL 5.5 servers running on Windows 7.

My application is running against the master server's database, and the same updates are being applied to the slave database server.

But for some unknown reason, replication sometimes breaks.

On running the command:

SHOW STATUS LIKE 'Rpl_semi_sync%';  

It gives this status:

'Rpl_semi_sync_master_no_times', '0'
'Rpl_semi_sync_master_no_tx', '0'
'Rpl_semi_sync_master_status', 'ON'     <<-------------
'Rpl_semi_sync_master_timefunc_failures', '0'
'Rpl_semi_sync_master_tx_avg_wait_time', '338846'
'Rpl_semi_sync_master_tx_wait_time', '29479685'
'Rpl_semi_sync_master_tx_waits', '87'
'Rpl_semi_sync_master_wait_pos_backtraverse', '0'
'Rpl_semi_sync_master_wait_sessions', '0'
'Rpl_semi_sync_master_yes_tx', '3106'

Ideally, in semi-synchronous replication, when the sync breaks the status should change to OFF, since the master is not able to receive any acknowledgement from the slave. Please help us in this regard.
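A couple of checks that may help narrow this down (a sketch; these are the standard 5.5 semisync plugin variables). Note that the master is designed to fall back to asynchronous replication when no acknowledgement arrives within rpl_semi_sync_master_timeout, and it can switch back to ON once the slave catches up, so a brief break may not be visible in a later status poll:

-- On the master: the fallback timeout, in milliseconds.
SHOW VARIABLES LIKE 'rpl_semi_sync_master_timeout';

-- On the slave: whether the semisync slave side is currently active.
SHOW STATUS LIKE 'Rpl_semi_sync_slave_status';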

Mongo replication lag slowly increasing

Posted: 30 Jun 2013 08:45 AM PDT

I am running a replica set in production with slaveOk = false, using Mongo 2.0.7 on AWS. The replication lag on one of the servers is close to 58 hours. This replication lag sometimes decreases (at a very slow rate) and sometimes increases, but overall it is growing by 1-2 hours per day.

  • I restarted the server, but saw no benefit.
  • The read rate is 10 times the read rate of the other secondaries.
  • I checked the logs, but found nothing weird.
  • The lock % is very high (close to 100%) only on that server (and there are no such issues with the capped collection). The other secondaries hardly reach 10-20% lock.

One more weird thing I noted about this server (a secondary) is that MMS shows its version as 2.2.1 and its type as a 'standalone' server, which is not the case (cross-checked using the db.version() and rs.status() commands).

Unable to start Oracle Database Server : Get an error as I try to start it

Posted: 30 Jun 2013 09:45 AM PDT

I just installed Oracle Database Express Edition 11g Release 2 for Windows. It created a shortcut icon on the desktop:

[screenshot of the desktop shortcut]

but when I click this icon I see this dialog box:

[screenshot of the error dialog]

What is it? How do I start my Oracle Database Server?

SQL Server replication subscriptions marked as inactive

Posted: 30 Jun 2013 10:45 AM PDT

Is there any way to force SQL Server NOT to mark subscriptions as inactive, ever?

It happens sporadically when there are connection issues, and I don't want to have to reinitialize the subscription every time.

Note, I'm not talking about subscriptions being marked as expired, just as inactive.

Thank you.

"Waiting for initial communication packet" error 0 in MySQL

Posted: 30 Jun 2013 06:45 AM PDT

I am using MySQL server 5.1.51 on a Linux machine.

The application was working fine for the last 32 days, but since yesterday a number of clients have not been able to connect to the database from our application. They are getting the following error:

-2147467259 [MySQL] [ODBC 5.1 Driver] Lost Connection to MySQL server
at 'waiting for initial communication packet', system error: 0

What can I do to fix this?
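This error generally means the TCP connection was established but the client never received the server's handshake in time; server overload, DNS reverse-lookup stalls, and network problems between the clients and the DB host are the usual suspects. A few quick server-side checks (a sketch):

SHOW VARIABLES LIKE 'connect_timeout';   -- seconds allowed for the handshake
SHOW STATUS LIKE 'Threads_connected';    -- current load vs. max_connections
SHOW STATUS LIKE 'Aborted_connects';     -- a rising count means failed handshakes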

"freeing items" taking forever on inserts, updates and deletes

Posted: 30 Jun 2013 02:45 AM PDT

Recently I migrated all my MySQL database tables from the MyISAM engine to InnoDB. I'm using the DotCloud MySQL service as my server (http://docs.dotcloud.com/services/mysql/).

After the migration, everything went fine for the first two weeks, but for the past couple of days I've been experiencing some problems.

At first, all my queries would lock behind an update or insert on certain tables. With SHOW PROCESSLIST; I could see that the blocking queries had a state of "freeing items". I started to research and found this:

Why do MySQL threads often show "freeing items" status when the query cache is disabled?

I changed innodb_thread_concurrency to 32 as suggested, and that really did help keep the website running well; I'm not experiencing application locks any more.

The problem is that inserts and updates are still slow, some of them taking more than 60 seconds, and that is terrible. I'll try to put my database under maintenance and optimize or repair it, but I don't know if that is really going to work.

Does anyone know what I could do about it? Thanks!
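Since the thread linked above ties the "freeing items" state to query-cache mutex contention, it may be worth confirming the cache is genuinely out of play; the usual way to rule it out is to size it to zero (a sketch):

SHOW VARIABLES LIKE 'query_cache%';
SET GLOBAL query_cache_size = 0;   -- a zero size takes the cache mutex out of play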

How do you kick users out of a SQL Server 2008 database?

Posted: 30 Jun 2013 03:55 AM PDT

We need to do a restore, and cannot because other users are connected. We thought we had disconnected every process, but apparently not.

How can we, from Management Studio, kick everyone else off so we can do this restore?
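The standard trick is to put the database into single-user mode, which kills the other sessions in the process; a sketch (the database name and backup path are placeholders):

ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- kicks everyone off
RESTORE DATABASE MyDb FROM DISK = N'C:\backups\MyDb.bak' WITH REPLACE;
ALTER DATABASE MyDb SET MULTI_USER;                           -- reopen afterwards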
