[how to] How to choose a collation for international database? |
- How to choose a collation for international database?
- How to set up users SQL 2008 express over workgroup? (no domain)
- Updateable Subscriptions: Rows do not match between Publisher and Subscriber
- help in designing reporting database
- Error using sp_trace_create with parameters
- Replicated Database Log File Maintenance
- Help creating this query
- SSIS organization
- How to avoid empty rows in SSIS Excel Destination?
- mySql Event gets expired on closing of the command prompt
- how to change a user password while checking the current password in oracle?
- strange io in mysql server
- MSDTC on SQL Server 2012 with mount point for data disk
- How to disable oracle's MAX_ENABLED_ROLES limit
- Oracle error handling strategy [duplicate]
- Is it possible to use Oracle XMLTABLE and %type column type selector?
- What is the best way to insert a XML input to an oracle table in stored procedure?
- Do roles have an owner in oracle?
- effective mysql table/index design for 35 million rows+ table, with 200+ corresponding columns (double), any combination of which may be queried
- postgres backup / restore: restored database much smaller?
- MySQL Replication using SSL
- Finding out the hosts blocked by mysql server
- Delete word, its meanings, its meaning's example sentences from DB
- INNER JOIN Giving time out on large database
- Do I need an index on this?
- How to modify an update in Oracle so it performs faster?
- Query to find and replace text in all tables and fields of a mysql db
- Export as csv or xsl file from SQL Server 2008 express R2
- MongoDB: queue vs. realtime write/update
- The order of INNER JOIN's
How to choose a collation for international database? Posted: 15 Jul 2013 07:38 PM PDT I'm designing a database that will store data in several languages (using UTF-8), so I think the best way to display query results is to order them according to the user's language in the query itself (because there is more than one correct way to do that), as follows: Assuming this is the correct way to work with international data, which is the best collation for the database itself? PostgreSQL documentation says:
I think this is the best choice in this case, or am I wrong? (Bonus question: is selecting the collation in the query itself too slow?). |
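A minimal sketch of the per-query approach, assuming PostgreSQL 9.1 or later (the database name, table, and locale names below are placeholders, and the locales must be installed on the server):

```sql
-- Create the database with a neutral collation; ordering is then
-- decided per query rather than baked into the database.
CREATE DATABASE intl_db TEMPLATE template0
    ENCODING 'UTF8' LC_COLLATE 'C' LC_CTYPE 'C';

-- Pick the collation at query time, based on the user's language:
SELECT name FROM items ORDER BY name COLLATE "de_DE";
SELECT name FROM items ORDER BY name COLLATE "es_ES";
```

A per-expression `COLLATE` sort cannot use a plain b-tree index on the column, so for large tables an index created with the matching collation may be needed.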
How to set up users SQL 2008 express over workgroup? (no domain) Posted: 15 Jul 2013 04:28 PM PDT The situation is this: a simple network with one server (not a domain controller) running SQL Server 2008 Express (64-bit), and 8 workstations running Windows 7 Pro connected to it. Last week, after Windows updates finished on the workstations and server, one workstation was denied logon access to the SQL Server. It can see the server and the files stored on it, and can access those files, but cannot log on to the SQL Server and its DB for our CRM software. I set this network up over a year ago: I upgraded from a Windows Server 2003 box running SQL Server 2005 Express on a domain (AD on a separate box) to the current standalone Server 2008 R2 (no domain), moved the DB over, and worked out how to allow access by creating user accounts with passwords on the server. Clients used the same user names and passwords when connecting to the server, and everything was fine. So why, after Windows updates did their thing, would one workstation be unable to log in again? (Yes, I have uninstalled those updates and tried reconnecting; no luck.) Given that I had done this before, what am I missing this time around? I read the 'suggested questions that may have your answer' while writing this, and all of the answers are things I have already done or considered, with no positive result. I have created a new user name with password in SQL, created a matching Windows account for it on the server with the same password, reset the workstation's machine and user name to match, and went into Control Panel\Credentials and reset those, forcing a fresh logon to the server to pass those credentials. That all verifies: when I browse to the server in Windows Explorer, it asks for ServerName\UserName plus password, I enter that, and it accepts it. Then, when trying to connect to the SQL Server, it fails with the message 'make sure you have the proper credentials etc...'
I am certain I am overlooking one 'little tiny devil of a detail', so I am hoping someone here can assist in uncovering it. In SQL Server Management Studio I see ServerName\InstanceName, and Authentication is set to Windows Authentication when I log into it. The server itself has mixed-mode SQL\Windows Authentication enabled. Named Pipes is set up, as well as TCP/IP. (All other workstations can log in and connect.) When I create the new user name, it is passed through to the instance DB and has the same parameters as all the other accounts, but it is still unable to log on to the server. I feel as though I am missing a fundamental 'how to set up user access to SQL using mixed-mode authentication' step. I appreciate everyone's time in reading this, as well as any and all assistance given. Khan |
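Since Windows credential mapping in a workgroup is fragile, one way to take it out of the picture is a pure SQL-authentication login, which needs no matching Windows account at all. A minimal sketch, assuming mixed mode is enabled (login, password, database, and role names below are placeholders):

```sql
-- Create a SQL-auth login on the server:
CREATE LOGIN crm_user WITH PASSWORD = 'Str0ng!Passw0rd', CHECK_POLICY = ON;

-- Map it into the CRM database and grant basic access:
USE CrmDb;
CREATE USER crm_user FOR LOGIN crm_user;
EXEC sp_addrolemember 'db_datareader', 'crm_user';
EXEC sp_addrolemember 'db_datawriter', 'crm_user';
```

The workstation would then connect with "SQL Server Authentication" selected in the connection dialog instead of Windows Authentication, which sidesteps whatever the update changed in the Windows credential path.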
Updateable Subscriptions: Rows do not match between Publisher and Subscriber Posted: 15 Jul 2013 01:52 PM PDT I have transactional replication with updatable subscribers set up in SQL Server 2008, with 3 subscribers and 1 publisher. I had to set up replication again due to some errors related to the database and the application that uses it. However, now I run into issues when I try updating a section in my application. The update does not go through and gives the following error:
The update statement obviously doesn't go through. However, when I try it a second time it works. Replication is working, and everything seems to be replicating. Can anyone explain why this error would occur and how I can resolve this issue? I would really appreciate the help!... |
help in designing reporting database Posted: 15 Jul 2013 01:02 PM PDT We are planning to move our detailed report functionality (generating multi-column PDF/XLS output with group by, order by, etc.) from a transactional database to an offline reporting database. These reports can be ad hoc, too. In light of this, we are considering transferring data from the transactional db into multiple flat, denormalized tables to avoid joins and improve query time. But we need an expert's opinion on this. Is this a good approach? There has been a debate between having one big flat table vs. multiple flat tables. I feel the multiple-flat-tables option is better. Any suggestions? If this is an option for detailed reports, how do we achieve summary data reports? Data cubes? I do not know much about cubes. |
Error using sp_trace_create with parameters Posted: 15 Jul 2013 02:34 PM PDT I'm running the following SQL on SQL Server 2000: and am getting the following error and can't understand why:
If I pass all of the parameters in like so: declare @id int I get the error:
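The trace definition itself was lost above, but the most common cause of a parameter error with `sp_trace_create` on SQL Server 2000 is a type mismatch: `@maxfilesize` must be a declared BIGINT variable (an integer literal fails), and the file path must be a Unicode string. A hedged sketch with a placeholder path:

```sql
-- Sketch: note the explicitly typed variables.
DECLARE @id INT
DECLARE @maxfilesize BIGINT
SET @maxfilesize = 5   -- size in MB before rollover

-- The N'' path is NVARCHAR; SQL Server appends .trc itself,
-- and the target file must not already exist.
EXEC sp_trace_create @id OUTPUT, 0, N'C:\traces\mytrace', @maxfilesize
SELECT @id AS traceid
```

If the error persists with typed variables, checking that the service account can write to the target directory is the usual next step.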
|
Replicated Database Log File Maintenance Posted: 15 Jul 2013 02:08 PM PDT I have a database on the publisher that is involved in replication (the publication is configured for both merge and transactional). I am trying to regain control of the log file for this particular database (VLF count, size, etc.). Is there anything I need to do (or be cautious of) with the replication setup before trying to perform any maintenance on the log file? I am not an expert in the area of replication and cannot find anything solid that provides guidance as to what measures should be taken. Edit: This would include working on the distribution database as well; data retention was not configured at all for some reason. |
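Before shrinking or reorganizing anything, it is worth checking what is actually pinning the log. A short diagnostic sketch (the database name is a placeholder):

```sql
-- If log_reuse_wait_desc reports REPLICATION, the log reader has not
-- yet processed those records and a shrink will not reclaim space.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyPublishedDb';

DBCC SQLPERF(LOGSPACE);   -- size and percent used of every log
DBCC LOGINFO;             -- one row per VLF in the current database
```

Only once `log_reuse_wait_desc` shows `NOTHING` (or `LOG_BACKUP`, cleared by a log backup) is it generally safe to shrink and regrow the log to fix the VLF count.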
Help creating this query Posted: 15 Jul 2013 03:36 PM PDT I'd like to integrate the following query:
INTO this query:
Here's a sql fiddle: http://sqlfiddle.com/#!2/bef27/1 Here's my schema: Thanks for the help! QUESTION: This query limits the data to only |
SSIS organization Posted: 15 Jul 2013 04:16 PM PDT I'm familiar with SSMS, but I've never used .NET, C#, or Visual Studio (been doing other things: assembler, C, Unix, vi, Oracle, etc.). I'm trying to figure out basic 2012 SSIS ETL (no BI or data warehouse) and can't find anything that explains how all of these fit together: catalog Say I just want to build a simple test using SSIS, importing a file into a table: which of the above are required, and in what order do I create them? Also, do you have any good links to basic SSIS 2012 ETL overviews and/or how-to articles? |
How to avoid empty rows in SSIS Excel Destination? Posted: 15 Jul 2013 06:09 PM PDT Does anyone have a way to avoid empty rows when using SSIS to export to Excel? Here's a simple example of one data flow task: OLE DB Source: Data Conversion (to handle the annoying UNICODE / NON-UNICODE deal): The end result is either of the two below, depending on the value of "FirstRowHasColumnName" in the Excel Connection Manager. Note the blank rows. |
mySql Event gets expired on closing of the command prompt Posted: 15 Jul 2013 07:20 PM PDT As I am quite new to the server side, I tried to create an event using
The event keeps updating the field as long as the MySQL command prompt is open. When I close the command prompt, the event gets dropped/expired, whereas my event scheduler is running and waiting for the next activation. Please tell me how to avoid the event being dropped. |
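Since the original `CREATE EVENT` statement was lost, here is a hedged sketch of the two usual causes of a "disappearing" event (the event, table, and column names are placeholders):

```sql
-- 1) Make sure the scheduler is enabled globally, not just for the
--    current session (also set event_scheduler=ON in my.cnf so it
--    survives a server restart):
SET GLOBAL event_scheduler = ON;

-- 2) By default an event is dropped once its schedule completes;
--    ON COMPLETION PRESERVE keeps the event definition around:
CREATE EVENT update_my_field
    ON SCHEDULE EVERY 1 MINUTE
    ON COMPLETION PRESERVE
    DO UPDATE my_table SET my_col = my_col + 1;
```

An event created with `ON COMPLETION PRESERVE` is independent of any client connection, so closing the command prompt should no longer affect it.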
how to change a user password while checking the current password in oracle? Posted: 15 Jul 2013 07:25 PM PDT In my application I have a change-password form in which users enter their current password and the new password. I then change the password using the following command: My problem is that I don't know how to check whether the current password the user entered in the form is correct (as the passwords are stored encrypted in Oracle). How should I do this? |
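One option worth considering is letting Oracle do the check itself via the `REPLACE` clause of `ALTER USER`. A hedged sketch (user and passwords are placeholders; note that Oracle only enforces verification of the old password when the user is changing their own password and a password-verify function is in effect for their profile, so test this against your configuration):

```sql
-- Fails if old_password does not match the user's current password
-- (under the conditions noted above), so no separate check is needed:
ALTER USER app_user IDENTIFIED BY new_password REPLACE old_password;
```

If that does not fit your setup, the other common approach is simply attempting a fresh connection with the entered current password from the application layer: a failed logon means the password was wrong.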
strange io in mysql server Posted: 15 Jul 2013 07:22 PM PDT One of my MySQL servers has become slow recently, and I found that there is high IO on the server: However, when I enter What's possibly going on here? |
MSDTC on SQL Server 2012 with mount point for data disk Posted: 15 Jul 2013 07:22 PM PDT I have installed a new SQL Server 2012 instance on a cluster. The installation went fine, but when I added MSDTC as a resource of the instance, it went wrong. Right after I added it, the MSDTC was online, but when I tried my first failover it refused to come online. The error message was "MSDTC failed". My configuration is: a physical disk (let's say J:) with two mount points, "DATA" and "LOG". I have read about people who experienced this kind of error with mount points and SQL Server 2008, but nothing about 2012. I tried creating the "MSDTC" folder on the physical drive (the J: one), but nothing improved. Does anyone know something about this, or know how to configure MSDTC and force it to use the J: drive? |
How to disable oracle's MAX_ENABLED_ROLES limit Posted: 15 Jul 2013 02:09 PM PDT How can I disable Oracle's MAX_ENABLED_ROLES limit, or expand the limit's value? [Oracle 10g (win32)] |
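For what it's worth, `MAX_ENABLED_ROLES` cannot be disabled outright; it is a static initialization parameter, so raising it means editing the spfile and restarting the instance. A hedged sketch (the value shown is the documented upper bound in older releases; in 10g the parameter is already deprecated and retained only for backward compatibility, so verify against your exact version):

```sql
ALTER SYSTEM SET max_enabled_roles = 148 SCOPE = SPFILE;
-- Then: SHUTDOWN IMMEDIATE; STARTUP;
```

If the session is genuinely hitting the limit, consolidating privileges into fewer roles is usually the more durable fix than raising the cap.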
Oracle error handling strategy [duplicate] Posted: 15 Jul 2013 07:24 PM PDT This question already has an answer here: I am developing a project using Oracle. I am writing functions and stored procedures to handle CRUD statements. My question is: in addition to Oracle check constraints, do I have to check error situations myself, or should I let Oracle do the job? For example, if a column in a table is unique and I want to insert into that table, do I have to check that the value is unique myself, or should I let the Oracle unique constraint do the job and raise an error if the value is repeated? |
Is it possible to use Oracle XMLTABLE and %type column type selector? Posted: 15 Jul 2013 07:23 PM PDT I am creating a stored procedure which gets an XML input and inserts the data into a table using I want to know how I can specify a field's type according to a table column's type when listing columns in XMLTABLE (using something like %type). Here is my simplified code: |
What is the best way to insert a XML input to an oracle table in stored procedure? Posted: 15 Jul 2013 07:24 PM PDT I want to write a stored procedure which takes an XML input (CLOB type) and inserts the data into a table. My XML structure is something like this: I am stuck among the many options Oracle gives me to handle this. using I want to know which one is better in terms of performance and resource usage. The input XML is usually less than 50KB. |
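For input this small (under 50KB), one common and simple pattern is a single `INSERT ... SELECT` over `XMLTABLE`, which shreds the document set-wise rather than looping. A hedged sketch (the procedure, table, element, and column names below are placeholders for the elided schema; note the `COLUMNS` clause requires literal SQL types, so `%TYPE` cannot be used there):

```sql
CREATE OR REPLACE PROCEDURE load_items (p_xml IN CLOB) AS
BEGIN
  INSERT INTO items (id, name)
  SELECT x.id, x.name
  FROM XMLTABLE('/items/item'
         PASSING XMLTYPE(p_xml)
         COLUMNS id   NUMBER        PATH '@id',
                 name VARCHAR2(100) PATH 'name') x;
END;
/
```

A set-based insert like this generally outperforms row-by-row `EXTRACTVALUE` loops for documents of this size, though measuring against your real data is the only reliable comparison.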
Do roles have an owner in oracle? Posted: 15 Jul 2013 07:26 PM PDT Suppose that I have logged in to an Oracle database as user X, and user X then creates some roles using |
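For context: Oracle roles are not schema objects, so they have no owner; the creator merely receives the role `WITH ADMIN OPTION`, and the data dictionary records only grants. A sketch of where to look (the role name is a placeholder):

```sql
-- Roles exist database-wide, with no owning schema:
SELECT role FROM dba_roles WHERE role = 'MY_ROLE';

-- Who holds the role, and who can administer it:
SELECT grantee, granted_role, admin_option
FROM dba_role_privs
WHERE granted_role = 'MY_ROLE';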
effective mysql table/index design for 35 million rows+ table, with 200+ corresponding columns (double), any combination of which may be queried Posted: 15 Jul 2013 03:09 PM PDT I am looking for advice on table/index design for the following situation: I have a large table (stock price history data, InnoDB, 35 million rows and growing) with a compound primary key (assetid (int), date (date)). In addition to the pricing information, I have 200 double values that need to correspond to each record. I initially stored the 200 double columns directly in this table for ease of update and retrieval, and this had been working fine, as the only querying done on this table was by assetid and date (these are religiously included in any query against this table), and the 200 double columns were only read. My database size was around 45 GB.

However, now I have the requirement to be able to query this table by any combination of these 200 columns (named f1, f2, ... f200), for example: I have not historically had to deal with this large an amount of data before, so my first instinct was that indexes were needed on each of these 200 columns, or I would wind up with large table scans, etc. To me this meant I needed a table for each of the 200 columns, with primary key, value, and an index on the values. So I went with that: I filled up and indexed all 200 tables. I left the main table intact with all 200 columns, as it is regularly queried over an assetid and date range with all 200 columns selected. I figured that leaving those columns in the parent table (unindexed) for read purposes, and additionally having them indexed in their own tables (for join filtering), would be most performant. I ran EXPLAINs on the new form of the query, and indeed my desired result was achieved: EXPLAIN shows me that far fewer rows are scanned for this query. However, I wound up with some undesirable side effects. 1) My database went from 45 GB to 110 GB; I can no longer keep the db in RAM (I have 256 GB of RAM on the way, however). 2) Nightly inserts of new data now need to be done 200 times instead of once. 3) Maintenance/defrag of the new 200 tables takes 200 times longer than just the 1 table; it cannot be completed in a night. 4) Queries against the f1, etc. tables are not necessarily performant. For example: the above query, while EXPLAIN shows it is looking at < 1000 rows, can take 30+ seconds to complete. I assume this is because the indexes are too large to fit in memory.

Since that was a lot of bad news, I looked further and found partitioning. I implemented partitions on the main table, partitioned on date every 3 months. Monthly seemed to make sense to me, but I have read that once you get over 120 partitions or so, performance suffers. Partitioning quarterly will leave me under that for the next 20 years or so. Each partition is a bit under 2 GB. I ran EXPLAIN PARTITIONS and everything seems to be pruning properly, so regardless, I feel the partitioning was a good step, at the very least for analyze/optimize/repair purposes. I spent a good deal of time with this article: http://ftp.nchu.edu.tw/MySQL/tech-resources/articles/testing-partitions-large-db.html My table is currently partitioned with the primary key still on it. The article mentions that primary keys can make a partitioned table slower, but if you have a machine that can handle it, primary keys on the partitioned table will be faster. Knowing I have a big machine on the way (256 GB RAM), I left the keys on.

So as I see it, here are my options:

Option 1: Remove the extra 200 tables and let the query do table scans to find the f1, f2, etc. values. Non-unique indexes can actually hurt performance on a properly partitioned table. Run an EXPLAIN before the user runs the query and deny it if the number of rows scanned is over some threshold I define. Save myself the pain of the giant database. Heck, it will all be in memory soon anyway. Sub-question: does it sound like I have chosen an appropriate partition scheme?

Option 2: Partition all 200 tables using the same 3-month scheme. Enjoy the smaller row scans and allow the users to run larger queries. Now that they are partitioned, at least I can manage them 1 partition at a time for maintenance purposes. Heck, it will all be in memory soon anyway. Develop an efficient way to update them nightly. Sub-question: do you see a reason to avoid primary key indexes on these f1, f2, f3, f4... tables, knowing that I always have assetid and date when querying? It seems counterintuitive to me, but I am not used to data sets of this size. That would shrink the database a bunch, I assume.

Option 3: Drop the f1, f2, f3 columns in the master table to reclaim that space. Do 200 joins if I need to read 200 features; maybe it won't be as slow as it sounds.

Option 4: You all have a better way to structure this than I have thought of so far.

* NOTE: I will soon be adding another 50-100 of these double values to each item, so I need to design knowing that is coming. Thanks for any and all help.

Update #1 - 3/24/2013: I went with the idea suggested in the comments I got below and created one new table with the following setup: I partitioned the table in 3-month intervals. I blew away the earlier 200 tables so that my database was back down to 45 GB and started filling up this new table. A day and a half later it completed, and my database now sits at a chubby 220 GB! It does allow the possibility of removing these 200 values from the master table, as I can get them from one join, but that would really only give me back 25 GB or so, maybe. I asked it to create a primary key on assetid, date, feature and an index on value, and after 9 hours of chugging it really hadn't made a dent and seemed to freeze up, so I killed that part off. I rebuilt a couple of the partitions, but it did not seem to reclaim much/any space. So that solution looks like it probably isn't going to be ideal. Do rows take up significantly more space than columns, I wonder? Could that be why this solution took up so much more space? I came across this article: http://www.chrismoos.com/2010/01/31/mysql-partitioning-tables-with-millions-of-rows It gave me an idea. He says: "At first, I thought about RANGE partitioning by date, and while I am using the date in my queries, it is very common for a query to have a very large date range, and that means it could easily span all partitions." Now I am range partitioning by date as well, but will also be allowing searches by large date range, which will decrease the effectiveness of my partitioning. I will always have a date range when I search; however, I will also always have a list of assetids. Perhaps my solution should be to partition by assetid and date, where I identify typically searched assetid ranges (which I can come up with; there are standard lists: S&P 500, Russell 2000, etc.). This way I would almost never look at the entire data set. Then again, I am primary keyed on assetid and date anyway, so maybe that wouldn't help much. Any more thoughts/comments would be appreciated. Thanks. |
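The quarterly RANGE partitioning discussed in the question above can be sketched as follows (the table and column names follow the question, the column list is truncated, and the boundary dates are examples; note MySQL requires the partitioning column to be part of every unique key, which the (assetid, date) primary key satisfies):

```sql
CREATE TABLE price_history (
  assetid INT  NOT NULL,
  `date`  DATE NOT NULL,
  f1 DOUBLE, f2 DOUBLE,   -- ... through f200
  PRIMARY KEY (assetid, `date`)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(`date`)) (
  PARTITION p2013q1 VALUES LESS THAN (TO_DAYS('2013-04-01')),
  PARTITION p2013q2 VALUES LESS THAN (TO_DAYS('2013-07-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```

`EXPLAIN PARTITIONS SELECT ...` then confirms whether a given date-range query prunes down to the expected partitions.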
postgres backup / restore: restored database much smaller? Posted: 15 Jul 2013 07:21 PM PDT I am worried that my restored database is very different from the original: the original db is many times larger than the restored one. What is going on here? As far as I can tell, the website that test_db serves is still working perfectly well after a restore, but I need to know what's up before I use a backup in a live context. FYI, if I run vacuumdb on each database, there seems to be no change in database size. [Addendum, added later] In the tradition of RTFM, I have gone hunting in the manual for PostgreSQL. Here are some more pieces to the puzzle. Gratefully following up on @craig-ringer's advice to look into I will follow @chris-travers' advice and map out the number of rows in tables from each version of the database. I think in my case it'd be fair to say that VACUUM FULL will relieve the pressure on disk space, and it'll make original_vs_restored look better, but there's still this red flag of excessive bloat. I think autovacuum isn't doing anything, and that's a worry! Thanks for the guidance so far; it's fascinating. |
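A restored database is freshly packed, so a large size gap between the original and the restore usually just measures dead-tuple bloat in the original. A short diagnostic sketch (the database name is a placeholder):

```sql
-- Overall on-disk size:
SELECT pg_size_pretty(pg_database_size('test_db'));

-- Tables with the most dead tuples (candidates for bloat):
SELECT relname, n_live_tup, n_dead_tup,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

If `n_dead_tup` is consistently huge on hot tables, that supports the suspicion that autovacuum is not keeping up, and its thresholds (or whether it is enabled at all) are the next thing to check.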
MySQL Replication using SSL Posted: 15 Jul 2013 08:09 PM PDT I am in the process of replicating my database so I can have a master-slave configuration. One of the issues I have is with security: I am basically generating my server/client keys and certificates using openssl, and I also generate my own CA key and certificate to self-sign. I understand the issues with self-signed certificates on a public website, but do you think this will be as serious a problem when used for replication? |
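For reference, the wiring looks roughly like this once the certificates exist; a hedged sketch where the account name, password, host, and file paths are all placeholders:

```sql
-- On the master: require SSL for the replication account.
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%'
  IDENTIFIED BY 'secret' REQUIRE SSL;

-- On the slave: point at the master with the client cert material.
CHANGE MASTER TO
  MASTER_HOST     = 'master.example.com',
  MASTER_USER     = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_SSL      = 1,
  MASTER_SSL_CA   = '/etc/mysql/certs/ca-cert.pem',
  MASTER_SSL_CERT = '/etc/mysql/certs/client-cert.pem',
  MASTER_SSL_KEY  = '/etc/mysql/certs/client-key.pem';
START SLAVE;
```

Since both endpoints are under your control and trust the same private CA, the usual objection to self-signed certificates (a browser cannot verify who issued them) largely does not apply; the main thing to protect is the CA key itself.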
Finding out the hosts blocked by mysql server Posted: 15 Jul 2013 07:09 PM PDT Can someone tell me how to list the hosts that are blocked by the MySQL server because they crossed the max_connect_errors limit? Is there any table in which the MySQL server keeps this data? I am using mysql-server-5.1.63 |
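As far as I know, MySQL 5.1 does not expose the host cache anywhere queryable (the `performance_schema.host_cache` table only appeared in MySQL 5.6), so on 5.1 you can inspect the threshold and clear the blocks, but not list the blocked hosts:

```sql
-- Check the configured threshold:
SHOW GLOBAL VARIABLES LIKE 'max_connect_errors';

-- Empty the host cache, which unblocks every blocked host:
FLUSH HOSTS;
```

Raising `max_connect_errors` (or upgrading to 5.6+ for the `host_cache` table) are the usual longer-term options.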
Delete word, its meanings, its meaning's example sentences from DB Posted: 15 Jul 2013 04:09 PM PDT I have three tables as below (simplified for demonstration): where, Edit1: I am using SQLite3 as the database. Edit2: I figured out the following solution, which requires 3 SQL queries in order: I'm still looking for the answer to my question: is it possible to do the whole process in one query? |
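One way to collapse the three deletes into one statement in SQLite is to declare the relationships with `ON DELETE CASCADE`. A hedged sketch; the table and column names below are guesses at the simplified schema from the question:

```sql
PRAGMA foreign_keys = ON;   -- SQLite enforces FKs only when this is on,
                            -- and it must be set per connection

CREATE TABLE words    (id INTEGER PRIMARY KEY, word TEXT);
CREATE TABLE meanings (id INTEGER PRIMARY KEY,
                       word_id INTEGER REFERENCES words(id) ON DELETE CASCADE,
                       meaning TEXT);
CREATE TABLE examples (id INTEGER PRIMARY KEY,
                       meaning_id INTEGER REFERENCES meanings(id) ON DELETE CASCADE,
                       sentence TEXT);

-- One statement now removes the word, its meanings, and their examples:
DELETE FROM words WHERE word = 'serendipity';
```

This needs SQLite 3.6.19+ and requires the cascade to be declared at table-creation time; with an existing schema, the alternative is triggers or the three ordered deletes already found.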
INNER JOIN Giving time out on large database Posted: 15 Jul 2013 07:26 PM PDT I am getting a time out on this script:
Is there any other way to do this where I don't get a time out? I want to update both tables. Table Definitions: Table1 (uk_data) Table2 (uk_pc) |
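The original script was lost above, but a common way to stop a huge joined UPDATE from timing out is to chunk it by primary-key range so each statement is short and locks little. A hedged sketch; the join and SET columns are placeholders since the real definitions of uk_data and uk_pc were elided:

```sql
-- Repeat with the next id range (100001-200000, ...) until done;
-- each batch commits quickly instead of one multi-hour statement.
UPDATE uk_data AS d
INNER JOIN uk_pc AS p ON p.postcode = d.postcode
SET d.region = p.region
WHERE d.id BETWEEN 1 AND 100000;
```

An index on the join column of both tables matters at least as much as the batching; without it, each batch still scans.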
Do I need an index on this? Posted: 15 Jul 2013 07:23 PM PDT I have a SQL Server db and have a question regarding foreign key indexes. Here's a snippet from the db: In programme_affiliates I have a computed field; this is always set to '2' in this instance. The reason for it is that it maintains integrity between programmes and programme_affiliates. I also have a link to programme_types to ensure that '2' is always a valid programme type. I believe that you should always provide an index for a foreign key, but as it is a computed fixed field, is this necessary in this case? |
How to modify an update in Oracle so it performs faster? Posted: 15 Jul 2013 01:09 PM PDT I have this query: The trouble I am having is that this query takes a long time to run. I don't know whether it is possible to run it in parallel, or whether it would be easier to update via a cursor in a pipelined function. What would you suggest? This is all the information I believe is relevant. This is the execution plan of the internal select: Table data: This is the script of the historical table: This is the other table: The temporary table is the result of FEE_SCHEDULE_HISTORICAL minus FEE_SCHEDULE |
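Since the original UPDATE was elided, here is a hedged sketch of the usual rewrite: when an UPDATE drives off a large subquery, restating it as a MERGE often lets Oracle hash-join the two row sets once instead of re-probing per row, and a parallel hint can spread the work. The table, column, and join names below are placeholders for the real schema:

```sql
MERGE /*+ PARALLEL(f 4) */ INTO fee_schedule f
USING fee_schedule_temp t
   ON (f.fee_schedule_id = t.fee_schedule_id)
WHEN MATCHED THEN
  UPDATE SET f.fee = t.fee;
```

Parallel DML also needs `ALTER SESSION ENABLE PARALLEL DML;` before the statement to take effect; whether it helps depends on the execution plan shown in the question.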
Query to find and replace text in all tables and fields of a mysql db Posted: 15 Jul 2013 05:09 PM PDT I need to run a query to find and replace some text in all tables of a MySQL database. I found this query, but it only looks for the text in the tbl_name table and just in the column field. I need it to look in all tables and all fields (everywhere in the database): |
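There is no single built-in statement for this, but `information_schema` can generate one UPDATE per text column; you run the SELECT, then execute its output. A hedged sketch where the schema name and the old/new strings are placeholders:

```sql
SELECT CONCAT('UPDATE `', table_schema, '`.`', table_name, '` SET `',
              column_name, '` = REPLACE(`', column_name,
              '`, ''olddomain.com'', ''newdomain.com'');') AS stmt
FROM information_schema.columns
WHERE table_schema = 'my_db'
  AND data_type IN ('char', 'varchar', 'tinytext', 'text',
                    'mediumtext', 'longtext');
```

Take a backup first: a blanket REPLACE across every column cannot be undone, and serialized data (e.g. PHP-serialized strings) will be corrupted by plain string replacement.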
Export as csv or xsl file from SQL Server 2008 express R2 Posted: 15 Jul 2013 07:25 PM PDT I am basically a LAMP developer and got an assignment where SQL Server is required. I have installed SQL Server 2008 R2 Express and imported a backup file into it. Now I want to export a user table as an Excel or CSV file, but I am not able to do that. Can anyone tell me how I can export the table? |
MongoDB: queue vs. realtime write/update Posted: 15 Jul 2013 07:23 PM PDT On a large MongoDB I have 2 different use-cases for reading and 2 different types of write/update.
Now I wonder if MongoDB is able to "prioritize" different write tasks. I have one collection that does not require the data to be written and available instantly in real time; it's enough to use a slow queue and process these write tasks without hurry. However, there are also write statements that should be processed ASAP and be available with minimum delay (if any). Does MongoDB allow me to distinguish between high-priority (ASAP) and low-priority (queued) writes/updates? |
The order of INNER JOIN's Posted: 15 Jul 2013 07:25 PM PDT I would like to know if there are any rules about the order of INNER JOINs. I got 62x the performance on my query when my In reference to http://sql-4-life.blogspot.com/2009/03/order-of-inner-joins.html |
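As background: with pure INNER JOINs the optimizer is normally free to reorder the tables itself, so the listed order should not matter; when reordering the query text changes performance dramatically, it usually means the optimizer picked a poor join order in one of the two forms. In MySQL you can pin the listed order to test that hypothesis. A sketch with placeholder table names:

```sql
-- STRAIGHT_JOIN forces the tables to be joined in the order written,
-- so you can compare the forced plan against the optimizer's choice:
SELECT STRAIGHT_JOIN c.name, o.id
FROM small_filtered_table AS c
INNER JOIN big_table AS o ON o.customer_id = c.id;
```

Comparing `EXPLAIN` output with and without the modifier shows whether join order, rather than the query shape itself, explains the 62x difference.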