Sunday, August 11, 2013

[how to] LEFT vs CHARINDEX for matching the start of a string in T-SQL

LEFT vs CHARINDEX for matching the start of a string in T-SQL

Posted: 11 Aug 2013 08:02 PM PDT

I want to match the beginning of a string in a WHERE clause. While I recognise that database performance so often depends on the underlying data structure, is there a best practice for doing this? Is there one option which always outperforms the rest?

I want to avoid LIKE as I assume it will be less efficient than LEFT or CHARINDEX. As far as I can tell, my choices are below. Other suggestions welcome:

DECLARE @MyField as varchar(10)
SET @MyField = 'HELLOWORLD'

SELECT 1 WHERE @MyField LIKE 'HELLO%'
SELECT 2 WHERE LEFT(@MyField, 5) = 'HELLO'
SELECT 3 WHERE CHARINDEX('HELLO', @MyField) = 1

Kind Regards,

Jase.
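
A note on the comparison: the assumption that LIKE is slower is usually backwards once a real column is involved. A leading-literal LIKE is sargable, while wrapping the column in LEFT or CHARINDEX generally prevents an index seek. A minimal sketch against a hypothetical indexed Customers table (names are illustrative, not from the question):

CREATE TABLE dbo.Customers (Name varchar(50) NOT NULL);
CREATE INDEX IX_Customers_Name ON dbo.Customers (Name);

SELECT Name FROM dbo.Customers WHERE Name LIKE 'HELLO%';            -- can use an index seek on IX_Customers_Name
SELECT Name FROM dbo.Customers WHERE LEFT(Name, 5) = 'HELLO';        -- function on the column: typically a scan
SELECT Name FROM dbo.Customers WHERE CHARINDEX('HELLO', Name) = 1;   -- same problem: typically a scan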

Why doesn't `1` fit into a decimal(4, 4) column?

Posted: 11 Aug 2013 07:04 PM PDT

I have a decimal(4, 4) column in MS SQL Server 2008 R2. If I understand correctly, that means:

  • Precision of 4, i.e. up to four digits can be stored after the decimal place
  • Scale of 4, i.e. a total of four digits can be stored

When I run an update command to set the column to 1 (update myTable set myDecimalColumn=1), I get this error:

Arithmetic overflow error converting numeric to data type numeric.

I don't understand how that is true. 1 has no digits after the decimal and is only one digit long.
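
For reference, precision is the total number of digits and scale is how many of them sit to the right of the decimal point, so decimal(4, 4) can hold at most 0.9999 and the value 1 overflows it. A quick sketch of the distinction:

DECLARE @a decimal(4, 4) = 0.9999;  -- precision 4, scale 4: all four digits are fractional, max value 0.9999
DECLARE @b decimal(5, 4) = 1;       -- precision 5, scale 4: one integer digit is available, range up to 9.9999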

Disable multiple statements in MSSQL Server?

Posted: 11 Aug 2013 06:42 PM PDT

I'm running a web app backed by ASPX and MS SQL Server, and I want to disable the execution of multiple statements in one batch, e.g. select 1,2,3; select 1,2,3

So, can I disable that feature, or do I need to modify the connection string to make it work?

Handling or preventing conflicts in a multi-user system

Posted: 11 Aug 2013 04:20 PM PDT

I have a web application that is accessed by multiple users from different locations worldwide. Let's say I have an "edit" form for a specific document and two (or more) users are editing the document at the same time. I know that both of the users can edit the document and submit it, but user1 will not see that user2 made changes to the document and might submit an older version of it.

I was thinking about adding some kind of a lock, but then, if user1 only opened the document for editing but never changed anything (and kept it open), user2 would never be able to edit it. Therefore, I was thinking of adding a timeout for the editor, but then user1 might time out before he finished making his changes to the document (say, he went out for lunch).

The question is, how would one prevent the document from being edited by one user while another is changing it?
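
One widely used alternative to locks and timeouts is optimistic concurrency: keep a version number (or rowversion) on the document, read it together with the content, and make the update conditional on it. A minimal T-SQL sketch, assuming a hypothetical Documents table with an integer Version column:

UPDATE dbo.Documents
SET    Body    = @NewBody,
       Version = Version + 1
WHERE  DocumentId = @DocumentId
  AND  Version    = @VersionReadAtEditStart;

IF @@ROWCOUNT = 0
    RAISERROR('The document was changed by another user; reload it and reapply your edits.', 16, 1);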

Online database for a C# application

Posted: 11 Aug 2013 06:55 PM PDT

I know this might be the wrong place to ask this question, but I couldn't find a better one. If I'm disrupting the famous StackExchange balance, I'm truly sorry.

I am writing a C# app for myself. I want to run it on different computers and share a database between those copies of the app. The trouble I have is deciding which database to use, and how.

What I mean is, I thought I could simply connect to my hosting account's MySQL DB, but it turned out they don't support connections from the outside, so I looked for MySQL database providers and I didn't find anything interesting. Everything is either too expensive, or looks like a hosting account.

Are there any DB-only providers for clients like me? And which database, if not MySQL, should I utilize for such a project?

How important are transaction log VLFs for performance?

Posted: 11 Aug 2013 06:00 PM PDT

How important are VLFs when considering database performance? What would describe an optimal situation for VLFs?
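
For context, the current VLF count of a database can be listed (assuming sufficient permissions); each row returned is one virtual log file:

DBCC LOGINFO;  -- one row per VLF in the current database's transaction log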

script to automate an online mysql database dump, download and restore to other local db

Posted: 11 Aug 2013 06:21 PM PDT

I wish to find a script or other software to automate the following steps:

  1. connect to an online mysql database and create dump/compressed sql.zip
  2. download the created dump/sql.zip
  3. restore the same to other localhost mysql db for offline use

Windows Server Admin/Root access to online server is available.

Local admin/root is available.

Has anyone done this before?

Regards, Webix

Creating groups and members

Posted: 11 Aug 2013 06:13 PM PDT

Is it a good idea to manage members of a group like so:

  • Assuming members can belong to one group and no other
  • Simplified model

Group:
    name
    someid

User:
    name
    ForeignKey(Group)
    state  # integer: 0 - no group, 1 - waiting for approval from a group admin, 2 - belongs to a group

Something like Facebook groups.
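
If it helps to make the model concrete, a minimal relational sketch of the same idea (hypothetical names and types, not a definitive design):

CREATE TABLE `group` (
    group_id INT AUTO_INCREMENT PRIMARY KEY,
    name     VARCHAR(100) NOT NULL
);

CREATE TABLE `user` (
    user_id  INT AUTO_INCREMENT PRIMARY KEY,
    name     VARCHAR(100) NOT NULL,
    group_id INT NULL,                    -- NULL while the user belongs to no group
    state    TINYINT NOT NULL DEFAULT 0,  -- 0 = no group, 1 = waiting for approval, 2 = member
    FOREIGN KEY (group_id) REFERENCES `group` (group_id)
);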

Trigger that modifies values of the firing query

Posted: 11 Aug 2013 07:47 AM PDT

I need to write a PostgreSQL trigger that fires BEFORE INSERT ON a specific table. This trigger should retrieve from the database some values that are unknown when the insert query is created and executed, add them to the firing query, and then execute the query with all fields. I was thinking of something like:

CREATE FUNCTION myFunction() RETURNS trigger AS $myTrigger$
DECLARE
    myVar VARCHAR(20);
BEGIN
    SELECT field1 FROM anotherTable WHERE field2 = NEW.code INTO myVar;

    NEW.unknownField = myVar;

    RETURN NULL;
END;
$myTrigger$ LANGUAGE plpgsql;

CREATE TRIGGER myTrigger
BEFORE INSERT ON myTable
EXECUTE PROCEDURE myFunction();

Could this trigger work? (I'm not at home right now, so I can't try whether it's OK!)
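
For comparison, a working form of the same idea: NEW is only available in a row-level trigger, so FOR EACH ROW is required, and a BEFORE trigger has to RETURN NEW (returning NULL would silently skip the insert). Table and column names below are taken from the question:

CREATE OR REPLACE FUNCTION myFunction() RETURNS trigger AS $myTrigger$
BEGIN
    SELECT field1 INTO NEW.unknownField
    FROM anotherTable
    WHERE field2 = NEW.code;

    RETURN NEW;  -- RETURN NULL here would cancel the insert
END;
$myTrigger$ LANGUAGE plpgsql;

CREATE TRIGGER myTrigger
BEFORE INSERT ON myTable
FOR EACH ROW EXECUTE PROCEDURE myFunction();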

How to index for a query with independent range conditions?

Posted: 11 Aug 2013 01:21 PM PDT

What is the best way to index a table that I query using independent range conditions?
E.g. WHERE X < ? and Y > ?
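
To make the trade-off concrete, the usual candidates look like this (hypothetical table t). A composite index can seek only on its leading column's range and then filters the second condition inside that range, whereas two single-column indexes let the optimizer pick whichever is more selective (or combine them):

CREATE INDEX ix_t_x_y ON t (x, y);  -- seek on x < ?, then filter y > ? within the matching range

CREATE INDEX ix_t_x ON t (x);       -- alternatively, one index per column
CREATE INDEX ix_t_y ON t (y);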

Should I add a column or 1-1 table if property is usually null

Posted: 11 Aug 2013 07:19 AM PDT

There's a new unique property we need to add, which is relevant only to a specific subset (less than 1%). We thought it better to add a new 1-1 table than to add a new column which will be NULL 99% of the time.

Is a new table the preferred way?

Create Language plperl - Error: could not load library plperl.dll

Posted: 11 Aug 2013 08:03 PM PDT

When I create language plperl, I get the error: ERROR: could not load library "C:/Program Files/PostgreSQL/9.1/lib/plperl.dll": The specified module could not be found.

But on my computer the plperl.dll file does exist in the C:/Program Files/PostgreSQL/9.1/lib/ folder. (I cannot post an illustrative image; this forum requires >= 10 reputation.)

And if I SELECT * FROM pg_pltemplate, I get:

-[ RECORD 4 ]-+-------------------------
tmplname      | plperl
tmpltrusted   | t
tmpldbacreate | t
tmplhandler   | plperl_call_handler
tmplinline    | plperl_inline_handler
tmplvalidator | plperl_validator
tmpllibrary   | $libdir/plperl

Is PostgreSQL appropriate for processing this large but simple dataset?

Posted: 11 Aug 2013 11:53 AM PDT

I have a dataset I'm not sure how to store. The structure is simple: 30 numeric attributes identified by about 15 billion x, y, and t values. We're expecting ~17k t values and maybe 90k x/y combinations, though it could be some combination that gives us 20 million records in the end.

The processing involves retrieving 1-10 columns for each x and y pair and storing various calculated numeric values. Are we nearing/passing the limit of fast response times for Postgres with this many rows?

The processing is all done in-house by one person and shouldn't need to happen more than a couple dozen times as we settle on which summaries we want. So we're not worried about a high number of writes or connections, strict security, updates to existing records, table relations, or losing data because of network connection issues. Basically, we're not concerned about the kinds of things I understand the ACID side of an RDBMS like Postgres brings to the table. But we also don't need to replicate or distribute the data, provide high availability, change the schema on the fly, or handle an unusual number of writes (say a dozen for each of the 90k x/y pairs) - the kinds of things I understand NoSQL DBs offer.

So I guess the real issue is read-speed out of Postgres for a table of this size and the real question is whether there's a better storage mechanism for what we need to accomplish. Does the answer change if we have 40 billion records? 60?
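
If it helps to reason about row counts and index size, a minimal sketch of the obvious layout, with one row per (x, y, t) and the access path the described retrieval pattern would lean on (names and types are assumptions, not from the question):

CREATE TABLE observations (
    x      integer NOT NULL,
    y      integer NOT NULL,
    t      integer NOT NULL,
    attr01 double precision,
    attr02 double precision,
    -- ... remaining numeric attribute columns ...
    PRIMARY KEY (x, y, t)   -- the (x, y, t) ordering also serves lookups by (x, y) pair
);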

how to explain indexes

Posted: 11 Aug 2013 12:39 PM PDT

This question is about teaching: When I learned databases, we talked about indexes being like the card catalog in the town library. We had at least an author index, a title index, then the Dewey decimal number was the access path to the book, which is like the whole record in the database.

Books? What, on paper? Index cards in drawers?

The youngsters don't really know what I'm talking about, so how do we best explain it nowadays?

(Feel free to enjoy my lawn, just please recognize the difference between the grass and the astroturf at the prep school, ok?)

Moving away from CmdExec steps in job without losing logging detail (SQL Server 2008)

Posted: 11 Aug 2013 06:40 PM PDT

My department currently relies heavily on CmdExec steps in SQL Server Jobs, called using a SQL Server Login.

For example

osql /e /n /S SERVER01 /U USER /P PA$$$ /d DATABASE_01 /i "\\LOCATION\OF\SQL\SCRIPT\scriptfile.sql" /o "\\LOCATION\OF\SQL\LOG\scriptfile.log"  

For a lot of reasons (not least security and the expectation that this method will become obsolete) I'd like us to move away from this model and replace with a mixture of stored procedures and SSIS packages.

I've proposed this several times and I'm always told that we can only move in this direction if we can recreate the detail of logging possible by the method above, which our department has come to rely on quite heavily. The method above outputs the contents of the script fed in along with any server messages including all rowcounts and error messages, in context.

This does admittedly mean that errors that wouldn't count as errors for SQL Server are far more easily picked up - unexpectedly low row counts, etc. You could put in as much error handling as you liked and never get something as easy to follow as the logging from this method, so I can see this side of the argument.

To sell my boss on a move away from cmdexec steps I'd like to find a method of recreating something like the output of the logging that cmdexec currently gives us - perhaps to SQL tables rather than files (although either would be fine) - in SSIS packages and stored procedures.

I'm familiar with using raiserror for error handling - I use this extensively in stored procedures - but the task here is not just to catch SQL errors, but to catch all that cmdexec logging catches and ideally in context of the script being executed... Short of writing a stored procedure and executing it after every step in every script I can't see a way of doing this.
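
One pattern that gets part of the way there is a shared logging table plus a short insert after each significant statement, so rowcounts and messages land in context and stay queryable per run; a minimal sketch with hypothetical object names:

CREATE TABLE dbo.StepLog (
    LogId        int IDENTITY(1,1) PRIMARY KEY,
    ProcName     sysname        NOT NULL,
    StepMessage  nvarchar(2048) NOT NULL,
    RowsAffected int            NULL,
    LoggedAt     datetime2      NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

-- inside a procedure, after each statement of interest:
DECLARE @rc int, @Cutoff date = DATEADD(DAY, -30, GETDATE());

DELETE FROM dbo.Staging WHERE LoadDate < @Cutoff;   -- example statement
SET @rc = @@ROWCOUNT;

INSERT dbo.StepLog (ProcName, StepMessage, RowsAffected)
VALUES (OBJECT_NAME(@@PROCID), 'purge old staging rows', @rc);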

Also I'm keen to hear about best practice approaches to logging for people who don't use cmdexec steps. I've learnt SQL in this department and this is the only way I've ever known of doing things.

Increasingly for my own development I try to write stored procs with error handling. And if I come across issues that don't throw errors, I do my troubleshooting by rerunning the code interactively on a dev server so I can monitor the step-by-step information. This is less convenient than the full logging we get from cmdexec, but more secure.

Do other people do more than this? Is there a better way?

Order table Database design

Posted: 11 Aug 2013 06:59 AM PDT

I'm in need of some help with the database design of the order tables for a shopping cart in an online food ordering system. I have a design below. The tables I'm concerned about are order_details and order. Basically what is going to happen is that a user selects foods from restaurant menus and places an order; user details are stored in the user table (which is not shown in the image).

This is my first attempt, and I was never good at database design. So please take it easy if you feel that the question is too basic for this site.

And my questions are:

  • Are the tables good enough?
  • Do I need to add any more fields?
  • And one more question: I need to store multiple addresses for users (a user can have multiple shipping addresses). How can I handle this? (See the sketch after this question.)

    http://i.stack.imgur.com/HCYBi.jpg

Note: The item_price table in the image is not in use. Please ignore that table while answering. Thanks.
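
On the multiple-addresses point in the last bullet, one common approach is a separate address table keyed to the user, with the order referencing the chosen address; a minimal sketch with hypothetical column names:

CREATE TABLE user_address (
    address_id  INT AUTO_INCREMENT PRIMARY KEY,
    user_id     INT NOT NULL,
    line1       VARCHAR(255) NOT NULL,
    city        VARCHAR(100) NOT NULL,
    postal_code VARCHAR(20),
    is_default  TINYINT(1) NOT NULL DEFAULT 0,
    FOREIGN KEY (user_id) REFERENCES `user` (user_id)
);

-- the order table then stores address_id rather than copying the address into each order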

InnoDB Failure of some kind

Posted: 11 Aug 2013 01:59 PM PDT

I have MySQL 5.5 installed. I tried to install Joomla but it failed. I went into their SQL and replaced ENGINE=InnoDB with MyISAM and the install worked.

InnoDB is listed under SHOW ENGINES;

Any idea what the cause is or how to fix this so other InnoDB sites can be used?

I had these errors:

MySQL ERROR No: 1030
MySQL Error: Got error -1 from storage engine
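
Error 1030 ("Got error ... from storage engine") is a generic pass-through of a storage-engine failure, so the MySQL error log is the first place to look; a few hedged checks from the client that may narrow it down:

SHOW ENGINES;                    -- is InnoDB reported as YES/DEFAULT, or DISABLED?
SHOW VARIABLES LIKE 'innodb%';   -- data file and log file settings actually in effect
SHOW ENGINE INNODB STATUS\G      -- engine status and recent errors, if InnoDB is running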

MySQL is running but not working

Posted: 11 Aug 2013 03:59 PM PDT

In an attempt to tune MySQL to make it work with a recent installation of Drupal, I had to modify the MySQL settings on my server. After modifying the configuration file (/etc/my.cnf), MySQL stopped working. After some attempts I made it start again, but now all my PHP/MySQL websites are unable to connect to their DBs.

Here is why it is so confusing:

  • If I check a phpinfo.php on any given site, the MySQL info is there
  • phpmyadmin runs just fine
  • I can run mysql from SSH as root, but I see that MySQL activity is reduced; look:

[root@server mysql]# mysqladmin processlist

+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| Id  | User      | Host      | db        | Command        | Time | State              | Info             |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| 7   | eximstats | localhost | eximstats | Sleep          | 30   |                    |                  |
| 103 | DELAYED   | localhost | eximstats | Delayed insert | 30   | Waiting for INSERT |                  |
| 104 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 105 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 119 | root      | localhost |           | Query          | 0    |                    | show processlist |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+

My websites using MySQL almost all say:

Error establishing a database connection  

Another says:

Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'website_USER'@'localhost' (using password: NO)  

This is my current my.cnf:

[mysqld]
#datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#pid-file=/var/lib/mysql/mysqld.pid
#skip-bdb
#skip-innodb
#err-log=/var/log/mysql.log
#bind-address = server.escalehost.com
log-bin = /var/lib/mysql/log-bin
#innodb_buffer_pool_size=2M
#innodb_additional_mem_pool_size=500K
#innodb_log_buffer_size=500K
#innodb_thread_concurrency=2
#max_connections=125
#table_cache=2500
#thread_cache_size=100
#thread_stack=250K
#wait_timeout=10
#join_buffer=5M
#myisam_sort_buffer_size=15M
#query_cache_size=15M
#read_rnd_buffer_size=5M
max_allowed_packet = 64M
#open_files_limit=8602

#[client]
#port           = 3306
#socket=/var/lib/mysql/mysql.sock

#[mysql.server]
#user=mysql
#basedir=/var/lib

[mysqld_safe]
#socket=/var/lib/mysql/mysql.sock
#err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

I commented out most of it to return it to its simplest version... How can I make the web side connect to MySQL?
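
For what it's worth, "Access denied ... (using password: NO)" means the site connected without sending a password at all, which usually points at the application's own DB credentials or config files rather than my.cnf; a couple of quick checks using the account named in the error:

SELECT user, host FROM mysql.user WHERE user = 'website_USER';
SHOW GRANTS FOR 'website_USER'@'localhost';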

Mistake during Oracle 11g PITR

Posted: 11 Aug 2013 11:59 AM PDT

I tried using set until time ... and mistyped the date. Can anyone help me understand how to get my backups into a manageable state?

After the accidental recovery, most of my backup sets disappeared.

I recovered them and used 'catalog recovery area' .. and they're listed in 'list backupset'. But something still isn't right.

When I do a PITR now, I get messages that my dbf files aren't available, and 'list backupset' does seem to show backupsets, but they are listed differently from the files which weren't included in the 'bad' recovery.

Gists with the error and the list of backupsets are here https://gist.github.com/akinsgre/5561254

"Arithmetic overflow" when initializing SQL Server 2012 replication from backup

Posted: 11 Aug 2013 04:59 AM PDT

I'm initializing SQL Server replication from a backup, by following instructions from here:

http://www.mssqltips.com/sqlservertip/2386/initialize-sql-server-replication-using-a-database-backup/

...but, when I execute

USE MyDatabase
GO
EXEC sp_addsubscription
    @publication = MyDatabasePublication,
    @subscriber = 'AMAZONA-XXXXXXX',
    @destination_db = MyDatabase,
    @sync_type = 'initialize with backup',
    @backupdevicetype = 'disk',
    @backupdevicename = 'D:\Temp\MyDatabasepublication.bak'

I get the following error:

Msg 8115, Level 16, State 2, Procedure sp_MSsetupnosyncsubscriptionwithlsn, Line 237
Arithmetic overflow error converting expression to data type nvarchar.

Any idea why, or at least where I can find this stored procedure to troubleshoot further?

SQL Server 2012, Standard Edition.

UPDATE: It looks like the problem is caused by the fact that the database was created using SQL Server 2008 R2 and then attached here. Anyway, I still need a solution for it.
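
As a hedged starting point for the troubleshooting part: sp_MSsetupnosyncsubscriptionwithlsn is a replication system procedure, and on many installations its text can be inspected from the master database, which at least shows which expression is overflowing:

USE master;
GO
EXEC sp_helptext 'sp_MSsetupnosyncsubscriptionwithlsn';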

PLSQL : DBMS Jobs - parallelization?

Posted: 11 Aug 2013 03:30 PM PDT

I am looping over a cursor and executing a stored procedure; how do I parallelize this process using DBMS_JOB?

for rec in select column from table1
loop
    execute stored_procedure( rec.column );
end loop;
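
A sketch of one way to do this with DBMS_JOB: instead of calling the procedure inline, each iteration submits an independent job, so the calls can run concurrently once the session commits. Names are taken from the question; the column is renamed because COLUMN is a reserved word, and its value is assumed to be numeric (quote it inside the what string if it is text):

DECLARE
    l_job BINARY_INTEGER;
BEGIN
    FOR rec IN (SELECT col_val FROM table1) LOOP
        DBMS_JOB.SUBMIT(
            job  => l_job,
            what => 'stored_procedure(' || rec.col_val || ');'
        );
    END LOOP;
    COMMIT;   -- submitted jobs only become runnable after the commit
END;
/

On 11g, DBMS_SCHEDULER or DBMS_PARALLEL_EXECUTE are the more current options, but the shape of the solution is the same.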

Can't connect to SQL Server Windows 7

Posted: 11 Aug 2013 09:59 AM PDT

TITLE: Connect to Server

Cannot connect to localhost.


ADDITIONAL INFORMATION:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=2&LinkId=20476

I keep getting this error when trying to connect to SQL Server. I also ran a repair and got another error (screenshot not reproduced here).


Could this be the problem, and do you have any idea how I can repair it? I have looked all over Google and tried quite a few solutions with no luck.

I am using SQL Server 2008 R2 Developer Edition.

Alternative tools to export Oracle database to SQL Server?

Posted: 11 Aug 2013 08:59 AM PDT

I've got an Oracle database that I need to export (schema and data) to SQL Server.

I am trying the Microsoft SQL Server Migration Assistant for Oracle, but it is horribly slow, grossly inefficient and very un-user-friendly, e.g. I was having problems connecting to the SQL Server DB during data migration - but it still spent ~5 minutes preparing all the data before attempting a connection to SQL Server, then when it failed, the 5 minutes of preparatory work were wasted.

Right now, I'm just trying to connect to another Oracle DB using this tool, I left it overnight and came back this morning, and it's still stuck on 19% of "Loading objects..." And this is on a machine with a good 18GB RAM, of which maybe 8.5 GB currently in use. Task Manager shows me that SSMAforOracle.exe is using 0 CPU, 0 PF Delta, and no change whatsoever in memory usage. In other words: frozen stiff. Absolutely intolerable.

Are there any other tools out there that can migrate an Oracle DB to SQL Server a little more efficiently?

MySQL auto increment problem with deleting rows / archive table

Posted: 11 Aug 2013 05:59 AM PDT

A hosted server is running "maintenance" each weekend. I am not privy to the details.

In a database on this server there is a MyISAM table. This table never holds more than 1000 rows and usually far fewer. It is MyISAM so that the auto increment does not reset (and with so few rows it really doesn't matter). Rows are regularly deleted from this table and moved to an archive table (1M rows).

The problem is that lately the auto increment has "rolled back" slightly after each maintenance.

Is there any easy way to verify the auto increment of the insert table by reading the max id from both the insert and the archive table?

I'd rather not verify before each insert unless that is the only solution.

Here are the basic table layouts:

CREATE TABLE x (
    xid int(10) unsigned NOT NULL AUTO_INCREMENT,
    //snip
    PRIMARY KEY (xid)
) ENGINE=MyISAM AUTO_INCREMENT=124 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

CREATE TABLE xhistory (
    xid int(10) unsigned NOT NULL DEFAULT '0',
    //snip
    PRIMARY KEY (xid)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

Far from perfect workaround: (this was somewhat urgent, I had to manually update over 100 rows)

select xid from xhistory where x=?  

Check if just inserted row in x exists in history. If it does:

select greatest(max(x.xid),max(xhistory.xid)) as newval from x,xhistory  

Find a new id.

INSERT INTO x SELECT * FROM x AS iv WHERE iv.xid=? ON DUPLICATE KEY UPDATE xid=?  

And update our row with this id.
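
For the verification itself, the live counter can be read from information_schema and, if it has slipped behind the archive table's maximum, bumped back up; a sketch (the target value is illustrative):

SELECT AUTO_INCREMENT
FROM   information_schema.TABLES
WHERE  TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'x';

-- if the counter is lower than greatest(max(x.xid), max(xhistory.xid)) + 1, raise it:
ALTER TABLE x AUTO_INCREMENT = 125;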

Empty LONGTEXT or NULL?

Posted: 11 Aug 2013 10:59 AM PDT

In my MySQL DB I have one field called html_contents that contains all the HTML to be shown on a webpage. Obviously the HTML could be huge, and certainly bigger than 64KB, therefore I decided to use LONGTEXT rather than VARCHAR.

When the field is not set or left empty, would you say it is better (i.e. more efficient for the DB) to set it to NULL or to an empty string?

I read this: When to use NULL and when to use an empty string? but it talks about empty strings in general (probably small strings and not LONGTEXT).

I was wondering whether LONGTEXT is a different story, i.e. whether it saves a lot of space or execution time to use NULL instead of leaving empty LONGTEXT values around.

Best embed/reference/field strategy for many inserts (mongo)

Posted: 11 Aug 2013 07:59 AM PDT

I'm building a gaming backend with Mongo, and have some questions about how best to design the schema with respect to performance and database size.

My models:

User

Match
  - ReferenceMany (User)
  - ReferenceMany (Score)

Score
  - ReferenceOne (Match)
  - ReferenceOne (User)

It takes 2 users to start a match. Say we get 10,000 users and they all play one match each day against another user; that gives us 5,000 games a day.

Each match has three turns, which gives 6 scores, so for 5,000 games we get 30,000 scores. These scores can be inserted simultaneously (in each game), so I have to make sure one user doesn't overwrite another user's score. I believe I've solved that by having the scores in their own collection and embedding them on the match, like so:

$score = new Score();

$score->setUser($user);
$score->setScore($playerScore);
$score->setGame($game);
$score->setMatch($match);

// Save score.
$dm->persist($score);

// Add score to match.
$match->addScores($score);

My issue with this approach, though, is that it takes 6 queries just to insert a score (there are some validating queries before the above code). With 30,000 scores a day, that's a whole lot of queries.

My initial thought was to just add the scores to the match as an array, but what will happen if two users submit at the exact same time, updating the same match?

I'd be really interested in hearing some opinions on this. Let me know if I need to clarify anything. I'm using Symfony2 with Doctrine ODM if that helps.
