[how to] LEFT vs CHARINDEX for matching the start of a string in T-SQL
- LEFT vs CHARINDEX for matching the start of a string in T-SQL
- Why doesn't `1` fit into a decimal(4, 4) column?
- Disable multiple statements in MSSQL Server?
- Handling or preventing conflicts in a multi-user system
- Online database for a C# application
- How important are transaction log VLFs for performance?
- script to automate an online mysql database dump, download and restore to another local db
- Creating groups and members
- trigger that modifies values of the firing query
- How to index for a query with independent range conditions?
- Should I add a column or 1-1 table if property is usually null
- Create Language plperl - Error: could not load library plperl.dll
- Is PostgreSQL appropriate for processing this large but simple dataset?
- how to explain indexes
- Moving away from CmdExec steps in job without losing logging detail (SQL Server 2008)
- Order table Database design
- InnoDB Failure of some kind
- MySQL is running but not working
- Mistake during Oracle 11g PITR
- "Arithmetic overflow" when initializing SQL Server 2012 replication from backup
- PLSQL : DBMS Jobs - parallelization?
- Can't connect to SQL Server Windows 7
- Alternative tools to export Oracle database to SQL Server?
- MySQL auto increment problem with deleting rows / archive table
- Empty LONGTEXT or NULL?
- Best embed/reference/field strategy for many inserts (mongo)
LEFT vs CHARINDEX for matching the start of a string in T-SQL Posted: 11 Aug 2013 08:02 PM PDT I want to match the beginning of a string in a WHERE clause. While I recognise that database performance so often depends on the underlying data structure, is there a best practice for doing this? Is there one option which always outperforms the rest? I want to avoid LIKE as I assume it will be less efficient than LEFT or CHARINDEX. As far as I can tell, my choices are below. Other suggestions welcome. Kind regards, Jase.
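For comparison, the three usual candidates return the same rows; here is a sketch using Python's sqlite3, with substr and instr standing in for T-SQL's LEFT and CHARINDEX (the table and data are invented). Worth noting: in SQL Server only the LIKE 'prefix%' form is sargable; LEFT and CHARINDEX apply a function to the column, which generally prevents an index seek, so the assumption that LIKE is the slow option is usually backwards for prefix matching.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (name TEXT);
    INSERT INTO t VALUES ('database'), ('data'), ('index'), ('datum');
""")

prefix = "dat"
queries = {
    # T-SQL: name LIKE 'dat%'  -- sargable, can seek an index on name
    "LIKE": ("SELECT name FROM t WHERE name LIKE ? || '%'", (prefix,)),
    # T-SQL: LEFT(name, 3) = 'dat'  -- function on the column, forces a scan
    "LEFT": ("SELECT name FROM t WHERE substr(name, 1, ?) = ?", (len(prefix), prefix)),
    # T-SQL: CHARINDEX('dat', name) = 1  -- also not sargable
    "CHARINDEX": ("SELECT name FROM t WHERE instr(name, ?) = 1", (prefix,)),
}
results = {k: [r[0] for r in conn.execute(q, p)] for k, (q, p) in queries.items()}
assert results["LIKE"] == results["LEFT"] == results["CHARINDEX"]
print(results["LIKE"])  # ['database', 'data', 'datum']
```

Same answer three ways; the difference only shows up in the query plan once an index on the column exists.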
Why doesn't `1` fit into a decimal(4, 4) column? Posted: 11 Aug 2013 07:04 PM PDT I have a
When I run an update command to set the column to
I don't understand how that is true.
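For reference, DECIMAL(p, s) stores at most p digits in total with s of them after the decimal point, so decimal(4, 4) leaves 4 - 4 = 0 digits for the integer part: the largest storable value is 0.9999, and 1 overflows. A rough sketch of that rule using Python's decimal module (the fits helper is illustrative, not a SQL Server API):

```python
from decimal import Decimal

def fits(value, precision, scale):
    """Would `value` fit in a DECIMAL(precision, scale) column?

    A DECIMAL(p, s) column stores at most p digits in total, with s of
    them after the decimal point, leaving p - s digits for the integer
    part. (Illustrative helper, not a real SQL Server API.)
    """
    quantized = Decimal(value).quantize(Decimal(1).scaleb(-scale))
    tup = quantized.as_tuple()
    integer_digits = len(tup.digits) + tup.exponent
    return integer_digits <= precision - scale

print(fits("0.9999", 4, 4))  # True: all four digits fit after the point
print(fits("1", 4, 4))       # False: needs one integer digit, but 4 - 4 = 0 allowed
```

To store values up to 1 you need at least decimal(5, 4).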
Disable multiple statements in MSSQL Server? Posted: 11 Aug 2013 06:42 PM PDT I'm running a web app backed by ASPX and MSSQL Server, and I want to disable multiple-statement execution, e.g. So, can I disable that feature, or do I need to modify the connection string to make it work?
Handling or preventing conflicts in a multi-user system Posted: 11 Aug 2013 04:20 PM PDT I have a web application that is accessed by multiple users from different locations worldwide. Let's say I have an "edit" form for a specific document and two (or more) users are editing the document at the same time. Both users can edit the document and submit it, but user1 will not see that user2 made changes to the document and might submit an older version of it. I was thinking about adding some kind of lock, but then, if user1 only opened the document for editing but never changed anything (and kept it open), user2 would never be able to edit it. I therefore thought of adding a timeout for the editor, but then user1 might time out before he finished making his changes to the document (say he went out for lunch). The question is: how would one prevent the document from being edited by one user while another changes it?
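One common approach worth sketching is optimistic concurrency: store a version number with the document and make each save conditional on the version the editor originally loaded, so nobody is ever locked out and a stale save fails cleanly instead of silently overwriting. A minimal sketch with Python's sqlite3 (the schema is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
conn.execute("INSERT INTO docs VALUES (1, 'original text', 1)")

def save(conn, doc_id, new_body, expected_version):
    # The UPDATE only succeeds if the row still carries the version the
    # editor loaded; a concurrent save bumps the version and makes this fail.
    cur = conn.execute(
        "UPDATE docs SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_body, doc_id, expected_version),
    )
    return cur.rowcount == 1

# Both users loaded version 1; only the first save wins.
print(save(conn, 1, "user2's edit", 1))  # True
print(save(conn, 1, "user1's edit", 1))  # False: stale, must reload and retry
```

A rejected save can then trigger a "document was changed by someone else" prompt that offers to reload or merge, with no locks or timeouts involved.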
Online database for a C# application Posted: 11 Aug 2013 06:55 PM PDT I know this might be the wrong place to ask this question, but I couldn't find a better one. Should I disrupt the famous StackExchange balance, I'm truly sorry. I am writing a C# app for myself. I want to run it on different computers and share a database between those copies of the app. The trouble I have is deciding which database to use, and how. I thought I could simply connect to my hosting account's MySQL DB, but it turned out they don't support connections from outside, so I looked for MySQL database providers and didn't find anything interesting: everything is either too expensive or looks like a hosting account. Are there any DB-only providers for clients like me? And which database, if not MySQL, should I use for such a project?
How important are transaction log VLFs for performance? Posted: 11 Aug 2013 06:00 PM PDT How important are VLFs when considering database performance? What would an optimal situation for VLFs look like?
script to automate an online mysql database dump, download and restore to another local db Posted: 11 Aug 2013 06:21 PM PDT I wish to find a script or piece of software to automate the following steps:
Windows Server Admin/Root access to online server is available. Local admin/root is available. Has anyone done this before? Regards, Webix
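An untested sketch of the kind of script this usually ends up being, assuming SSH access to the online server and the mysql client locally; every host name, credential, and path below is a placeholder:

```sh
#!/bin/sh
# Untested sketch: replace host, user, and DB names with real values.
set -eu

REMOTE="root@online-server.example.com"   # SSH login on the online server
DB="mydb"                                 # database to copy
LOCAL_DB="mydb_local"                     # pre-created local target database
DUMP="/tmp/${DB}_$(date +%Y%m%d).sql.gz"

# 1. Dump on the remote server and stream the compressed output home.
ssh "$REMOTE" "mysqldump --single-transaction --routines '$DB' | gzip" > "$DUMP"

# 2. Restore the dump into the local database.
gunzip -c "$DUMP" | mysql "$LOCAL_DB"

echo "Restored $DB from $REMOTE into $LOCAL_DB"
```

Scheduled via cron (or a Windows scheduled task with an SSH client, since the question mentions a Windows server), this covers the dump, download, and restore steps in one pass.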
Creating groups and members Posted: 11 Aug 2013 06:13 PM PDT Is this a good idea to manage members of a group like so (assuming members can belong to one group and no other; simplified model)? Something like Facebook groups.
trigger that modifies values of the firing query Posted: 11 Aug 2013 07:47 AM PDT I need to write a PostgreSQL trigger that fires BEFORE INSERT ON a specific table. This trigger should retrieve from the database some values that are unknown at the moment the INSERT query is created and executed, add them to the firing query, and then execute the query with all fields. I was thinking of something like: could this trigger work? (I'm not at home now, so I can't check whether it's OK!)
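For what it's worth, a BEFORE INSERT row trigger in PL/pgSQL can do exactly this: look up the missing values, overwrite fields of NEW, and return it, and the returned row is what actually gets inserted. A hedged sketch with invented table and column names (the original code block didn't survive in this digest):

```sql
-- Assumed schema: orders(customer_id, price) and a lookup table
-- customer_prices(customer_id, price); both invented for illustration.
CREATE OR REPLACE FUNCTION fill_missing_fields() RETURNS trigger AS $$
BEGIN
    -- Patch the value that is unknown at INSERT time into NEW;
    -- the row PostgreSQL actually inserts is the one returned here.
    IF NEW.price IS NULL THEN
        SELECT cp.price INTO NEW.price
        FROM customer_prices cp
        WHERE cp.customer_id = NEW.customer_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_fill
BEFORE INSERT ON orders
FOR EACH ROW EXECUTE PROCEDURE fill_missing_fields();
```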
How to index for a query with independent range conditions? Posted: 11 Aug 2013 01:21 PM PDT What is the best way to index a table that I query using independent range conditions?
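One rule that applies regardless of engine: a B-tree composite index can only seek on the first range condition; conditions on later columns are applied as filters while scanning that range. So put the most selective range first, or create one index per leading column and let the optimizer pick. A sketch with a hypothetical table readings(a, b):

```sql
-- Query with two independent ranges (table and columns invented):
--   SELECT * FROM readings WHERE a BETWEEN 1 AND 5 AND b BETWEEN 10 AND 20;

CREATE INDEX ix_readings_a_b ON readings (a, b);  -- seeks on a, filters on b
CREATE INDEX ix_readings_b_a ON readings (b, a);  -- seeks on b, filters on a
```

If one range is always far more selective than the other, a single index with that column leading is usually enough.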
Should I add a column or 1-1 table if property is usually null Posted: 11 Aug 2013 07:19 AM PDT There's a new unique property we need to add, which is relevant only to a specific subset of rows (less than 1%). We thought it would be better to add a new table with a 1-1 relationship than to add a new column which will hold NULLs 99% of the time. Is a new table the preferred way?
Create Language plperl - Error: could not load library plperl.dll Posted: 11 Aug 2013 08:03 PM PDT When I But on my computer, the file "plperl.dll" exists in the "C:/Program Files/PostgreSQL/9.1/lib/..." folder (I cannot post an illustrative image; this forum requires >= 10 reputation). And if I
Is PostgreSQL appropriate for processing this large but simple dataset? Posted: 11 Aug 2013 11:53 AM PDT I have a dataset I'm not sure how to store. The structure is simple: 30 numeric attributes identified by about 15 billion x, y, and t values. We're expecting ~17k t values and maybe 90k x/y combinations, though it could be some combination that gives us 20 million records in the end. The processing involves retrieving 1-10 columns for each x and y pair and storing various calculated numeric values. Are we nearing or passing the limit of fast response times for Postgres with this many rows? The processing is all done in-house by one person and shouldn't need to happen more than a couple dozen times as we settle on what summaries we want. So we're not worried about a high number of writes or connections, strict security, making updates to existing records, table relations, or losing data because of network connection issues; basically, we're not concerned about the kinds of things I understand the ACID part of an RDBMS like Postgres brings to the table. But we also don't need to replicate or distribute the data, provide high availability, change the schema on the fly, or manage an unusual number of writes (say a dozen for each of the 90k x/y pairs): the kinds of things I understand NoSQL DBs offer. So I guess the real issue is read speed out of Postgres for a table of this size, and the real question is whether there's a better storage mechanism for what we need to accomplish. Does the answer change if we have 40 billion records? 60?
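A back-of-envelope size check may help frame this. Taking the 15 billion figure from the question and assuming float8 columns with roughly 24 bytes of per-row overhead in Postgres (an approximation), the bare table is already around 4 TiB before any indexes:

```python
# Back-of-envelope table size using the question's 15 billion figure.
# Assumptions: 30 float8 attributes plus x, y, t keys at 8 bytes each,
# and roughly 24 bytes of per-row overhead in Postgres.
rows = 15_000_000_000
row_bytes = 24 + (30 + 3) * 8
table_tib = rows * row_bytes / 1024**4
print(f"~{table_tib:.1f} TiB before any indexes")
```

At that scale the storage layout (partitioning by t, clustering by x/y) matters more than the choice of engine; at 20 million rows the question barely arises.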
how to explain indexes Posted: 11 Aug 2013 12:39 PM PDT This question is about teaching: when I learned databases, we talked about indexes being like the card catalog in the town library. We had at least an author index and a title index, and then the Dewey decimal number was the access path to the book, which is like the whole record in the database. Books? What, on paper? Index cards in drawers? The youngsters don't really know what I'm talking about, so how do we best explain it nowadays? (Feel free to enjoy my lawn, just please recognize the difference between the grass and the astroturf at the prep school, ok?)
Moving away from CmdExec steps in job without losing logging detail (SQL Server 2008) Posted: 11 Aug 2013 06:40 PM PDT My department currently relies heavily on CmdExec steps in SQL Server jobs, called using a SQL Server login. For example For a lot of reasons (not least security and the expectation that this method will become obsolete) I'd like us to move away from this model and replace it with a mixture of stored procedures and SSIS packages. I've proposed this several times and I'm always told that we can only move in this direction if we can recreate the detail of logging possible with the method above, which our department has come to rely on quite heavily. The method above outputs the contents of the script fed in, along with any server messages including all row counts and error messages, in context. This does admittedly mean that errors that wouldn't count as errors for SQL Server are far more easily picked up: unexpectedly low row counts, etc. You could put in as much error handling as you liked and never get something as easy to follow as the logging from this method, so I can see this side of the argument. To sell my boss on a move away from CmdExec steps I'd like to find a method of recreating something like the output of the logging that CmdExec currently gives us, perhaps to SQL tables rather than files (although either would be fine), in SSIS packages and stored procedures. I'm familiar with using RAISERROR for error handling, which I use extensively in stored procedures, but the task here is not just to catch SQL errors, but to catch all that CmdExec logging catches, and ideally in the context of the script being executed. Short of writing a stored procedure and executing it after every step in every script I can't see a way of doing this. Also I'm keen to hear about best-practice approaches to logging from people who don't use CmdExec steps. I've learnt SQL in this department and this is the only way I've ever known of doing things.
Increasingly for my own development I try to write stored procs with error handling. And if I come across issues that don't throw errors, I do my troubleshooting by rerunning the code interactively on a dev server so I can monitor the step-by-step information. This is less convenient than the full logging we get from CmdExec, but more secure. Do other people do more than this? Is there a better way?
Order table Database design Posted: 11 Aug 2013 06:59 AM PDT I'm in need of some help with the database design of the order table of a shopping cart for an online food ordering system. I have a design below. The tables I'm concerned about are order_details and order. Basically what is going to happen is the user selects foods from restaurant menus and places an order; user details are stored in the user table (which is not shown in the image). This is my first attempt and I was never good at database design, so please take it easy if you feel the question is way too basic for this site. My questions are:
Note: The
InnoDB Failure of some kind Posted: 11 Aug 2013 01:59 PM PDT I have MySQL 5.5 installed. I tried to install Joomla but it failed. I went into their SQL and replaced ENGINE=InnoDB with MyISAM and the install worked. InnoDB is listed under SHOW ENGINES;. Any idea what the cause is or how to fix this so other InnoDB sites can be used? I had these errors:
MySQL is running but not working Posted: 11 Aug 2013 03:59 PM PDT In an attempt to tune MySQL to make it work with a recent installation of Drupal I had to modify the MySQL settings on my server. After modifying the configuration file for MySQL (/etc/my.cnf) MySQL stopped working. After some attempts I made it start again, but now all my PHP/MySQL websites are unable to connect to their DBs. Here is why it is so confusing:
My websites using MySQL almost all say: Another says: This is my current my.cnf: I commented out most of it to return it to its simplest version... How can I make the web side connect to MySQL?
Mistake during Oracle 11g PITR Posted: 11 Aug 2013 11:59 AM PDT I tried using SET UNTIL TIME and mistyped the date. Can anyone help me understand how to get my backups into a manageable state? After the accidental recovery, most of my backup sets disappeared. I recovered them and used CATALOG RECOVERY AREA, and they're listed by LIST BACKUPSET. But something still isn't right. When I do a PITR now, I get messages that my .dbf files aren't available, and while LIST BACKUPSET seems to show backup sets, they are listed differently from the files which weren't included in the 'bad' recovery. Gists with the error and the list of backup sets are here: https://gist.github.com/akinsgre/5561254
"Arithmetic overflow" when initializing SQL Server 2012 replication from backup Posted: 11 Aug 2013 04:59 AM PDT I'm initializing SQL Server replication from a backup, by following the instructions from here: ...but when I execute I get the following error: Any idea why, or at least where I can find this stored procedure to troubleshoot further? SQL Server 2012, Standard Edition. UPDATE: It looks like the problem is caused by the fact that the database was created using SQL Server 2008 R2 and then attached here. Anyway, I still need a solution for it.
PLSQL : DBMS Jobs - parallelization? Posted: 11 Aug 2013 03:30 PM PDT I am looping over a cursor and executing a stored procedure; how do I parallelize this process with DBMS_JOB?
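An untested sketch of the usual pattern (the cursor query and procedure name are placeholders): instead of calling the procedure inline, submit one DBMS_JOB per row; each job runs asynchronously in its own background session, and none of them starts until the submitting session commits:

```sql
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- work_batches and process_batch are placeholders for the real
  -- cursor query and stored procedure.
  FOR r IN (SELECT batch_id FROM work_batches) LOOP
    DBMS_JOB.SUBMIT(
      job  => l_job,
      what => 'process_batch(' || r.batch_id || ');'
    );
  END LOOP;
  COMMIT;  -- the submitted jobs only begin running after COMMIT
END;
/
```

The JOB_QUEUE_PROCESSES parameter caps how many of the jobs actually run at once.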
Can't connect to SQL Server Windows 7 Posted: 11 Aug 2013 09:59 AM PDT TITLE: Connect to Server. Cannot connect to localhost. ADDITIONAL INFORMATION: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=2&LinkId=20476 I keep getting this error when trying to connect to SQL Server. I also ran a repair and I got this error. Could this be the problem, and do you have any idea how I can repair it? I have looked all over Google and tried quite a few solutions with no luck. I am using SQL Server 2008 R2 Developer Edition.
Alternative tools to export Oracle database to SQL Server? Posted: 11 Aug 2013 08:59 AM PDT I've got an Oracle database that I need to export (schema and data) to SQL Server. I am trying the Microsoft SQL Server Migration Assistant for Oracle, but it is horribly slow, grossly inefficient and very un-user-friendly, e.g. I was having problems connecting to the SQL Server DB during data migration, but it still spent ~5 minutes preparing all the data before attempting a connection to SQL Server, and when it failed, the 5 minutes of preparatory work were wasted. Right now, I'm just trying to connect to another Oracle DB using this tool; I left it overnight and came back this morning, and it's still stuck at 19% of "Loading objects..." And this is on a machine with a good 18GB RAM, of which maybe 8.5 GB is currently in use. Task Manager shows me that Are there any other tools out there that can migrate an Oracle DB to SQL Server a little more efficiently?
MySQL auto increment problem with deleting rows / archive table Posted: 11 Aug 2013 05:59 AM PDT A hosted server is running "maintenance" each weekend. I am not privy to the details. In a database on this server there is a MyISAM table. This table never holds more than 1000 rows and usually much less. It is MyISAM so that the auto increment does not reset (and with so few rows it really doesn't matter). Rows are regularly deleted from this table and moved to an archive table (1M rows). The problem is that lately the auto increment has "rolled back" slightly after each maintenance. Is there an easy way to verify the auto increment of the insert table by reading the max id from both the insert and the archive table? I'd rather not verify before each insert unless that is the only solution. Here are the basic table layouts: Far-from-perfect workaround (this was somewhat urgent; I had to manually update over 100 rows): check if the just-inserted row in x exists in history. If it does, find a new id and update our row with this id.
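A sketch of the verification step, using the x and history names from the workaround and assuming an id column (the table layouts didn't survive in this digest): read the highest id across both tables once after each maintenance window and bump the counter past it. MySQL will not set AUTO_INCREMENT below the table's current MAX(id) + 1, so the ALTER is safe to run unconditionally:

```sql
-- Highest id ever issued, across the live table and its archive
-- (column name `id` is assumed).
SELECT GREATEST(
    (SELECT COALESCE(MAX(id), 0) FROM x),
    (SELECT COALESCE(MAX(id), 0) FROM history)
) + 1 AS next_id;

-- Run after each maintenance window with the value computed above;
-- MySQL silently raises a too-low value to MAX(id) + 1, so this
-- can never push the counter backwards.
ALTER TABLE x AUTO_INCREMENT = 12345;  -- substitute next_id here
```

Run as a scheduled event or a post-maintenance script, this removes the need to check before every insert.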
Empty LONGTEXT or NULL? Posted: 11 Aug 2013 10:59 AM PDT In my MySQL DB I have one field called When the field is not set or left empty, would you say it is better (that is, more efficient for the DB) to set it to NULL or to an empty string? I read this: When to use NULL and when to use an empty string? but it talks about empty strings in general (probably small strings and not I was wondering if with
Best embed/reference/field strategy for many inserts (mongo) Posted: 11 Aug 2013 07:59 AM PDT I'm building a gaming backend with Mongo, and have some questions on how best to design the schema to maximize performance and minimize database size. My models: User; Match: ReferenceMany (User), ReferenceMany (Score); Score: ReferenceOne (Match), ReferenceOne (User). It takes 2 users to start a match. Say we get 10,000 users and all play one match each day against another user; that's 5,000 games a day. Each match has three turns, which gives 6 scores, so for 5,000 games we get 30,000 scores. These scores can be inserted simultaneously (in each game), so I have to make sure one user doesn't overwrite another user's score. I believe I've solved that by having the scores in their own collection and embedding them on the match, like so: My issue with this approach, though, is that it takes 6 queries just to insert a score (there are some validating queries before the above code). With 30,000 scores a day, that's a whole lot of queries. My initial thought was to just add the scores to the match as an array, but what will happen if two users submit at the exact same time, updating the same match? I'd be really interested in hearing some opinions on this. Let me know if I need to clarify anything. I'm using Symfony2 with Doctrine ODM if that helps.
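On the "what if two users submit at the exact same time" concern: updates to a single document are atomic in MongoDB, so a $push appends to the embedded array without one writer overwriting the other, and it replaces the multi-query insert-then-embed sequence with one operation. A sketch in 2013-era mongo shell syntax (the field names are guesses at the schema):

```javascript
// One atomic operation per score; the $elemMatch guard makes the write
// idempotent, so a retry cannot add the same user's turn twice.
db.matches.update(
    { _id: matchId,
      scores: { $not: { $elemMatch: { user: userId, turn: turn } } } },
    { $push: { scores: { user: userId, turn: turn, points: points } } }
);
```

Whether to also keep a separate scores collection then becomes purely a reporting question, not a concurrency one.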
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.