[how to] Query without having to specify table's schema
- Query without having to specify table's schema
- Record versioning and promotion
- Partial rollback doesn't decrement trancount
- SQL 2008R2 Trial [duplicate]
- Delete SQL Server Logins On Replicated Server
- Where should I look for details about 'access denied' errors?
- script to startup oracle database
- BLOB storage in rows
- BULK INSERTS IN ORACLE 11g SCAN Cluster environment
- where is oracle session timezone set?
- Postgresql function to create table
- MySql is not optimizing the query properly
- what's the fastest method to export a table through a SQL query? 11g - performance
- SQL agent job step's connection managers
- Optimize UNION query in MYSQL
- How can the InnoDB ibdata1 file grow by 5X even with innodb_file_per_table set?
- Is there slowdown inserting into an InnoDB table that has no index set?
- Cannot Utilize Maximum CPU and Memory Usage for MySQL
- Loading XML documents to Oracle 11g DB with control file
- Column partially determining accepted values of another column, should be somehow normalized?
- Copying my Oracle 10g Express database to another PC
- Normalization/normal forms - May a field describe another field?
- MySQL data too long error
- consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest?
- MySQL Dump configuration file
- How can I set a default session sql_mode for a given user?
- TRUNCATE TABLE statement sometimes hangs
- Users cannot view tables in non-default schema in SSMS
- How do I automatically deliver the SQL Server built in reports?
Query without having to specify table's schema Posted: 23 May 2013 08:04 PM PDT I imported a bunch of tables from SQL Server 2000 into my 2008 database. All the imported tables are prefixed with my username; in the table properties, 'erpadmin' is listed as the db schema. When I write a query I now have to include 'erpadmin.' in front of all the table names, which is confusing. How can I query these tables without the schema prefix? Current result: Desired result: |
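Two approaches are commonly suggested for this situation; a sketch, where the login, user, and table names are placeholders, not taken from the question:

```sql
-- Option 1: make 'erpadmin' the user's default schema, so unqualified
-- table names resolve to erpadmin.* automatically (SQL Server 2005+):
ALTER USER [your_user] WITH DEFAULT_SCHEMA = erpadmin;

-- Option 2: move a table into the dbo schema instead, one table at a time:
ALTER SCHEMA dbo TRANSFER erpadmin.SomeTable;
```

Option 1 leaves the objects where they are and only changes name resolution; option 2 actually relocates the objects, which may break scripts that reference the old schema-qualified names.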
Record versioning and promotion Posted: 23 May 2013 06:08 PM PDT Let's say we have this hierarchy: Now, this will not reflect real life completely, but.
3. Each city can modify laws. After a city modifies a law, it is available to the city itself and to the regions within the city. After a 3-step approval process, it can become a country-level law, and all children will inherit the new version of the law. Each city can still modify the new version of the law with its own changes.
Currently we solve this issue like this: we have a table for this. Because of it, we have to use table-valued functions in our application to filter the specific records that are relevant to the current location. (Some laws come from the country level, some from the city level, some from the organization level.) This makes the database much slower. We cannot use native unique constraints on our fields, because all fields must be unique only within a branch, so we have to use custom check constraints to keep the data clean. Insertion and editing of records is also much slower, since we have to promote some records to the country level and then apply data transformations for other countries. Currently we have around 150 tables, and we use a lot of joins. I'm not sure what the best way is to model complex hierarchical data with versioning. (I'm thinking we need an enhanced version of source control...) |
Partial rollback doesn't decrement trancount Posted: 23 May 2013 06:31 PM PDT Suppose I have an open SQL Server session and do the following: OK, so now I make a: My question is maybe obvious: why doesn't a partial rollback decrement the @@TRANCOUNT increased by the inner transaction? To commit the changes I have to issue two COMMIT commands. IMHO that doesn't seem natural. |
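The behavior in question can be reproduced with a named savepoint: a ROLLBACK to a savepoint undoes work but, by design, leaves @@TRANCOUNT unchanged. A sketch, with an invented savepoint name:

```sql
BEGIN TRANSACTION;               -- @@TRANCOUNT = 1
    BEGIN TRANSACTION;           -- @@TRANCOUNT = 2 (nested BEGIN only increments the counter)
    SAVE TRANSACTION sp1;        -- named savepoint
    -- ... some DML here ...
    ROLLBACK TRANSACTION sp1;    -- undoes the work since sp1; @@TRANCOUNT is STILL 2
COMMIT;                          -- @@TRANCOUNT = 1 (inner COMMIT only decrements)
COMMIT;                          -- @@TRANCOUNT = 0; changes are actually committed here
```

In T-SQL, nested transactions are essentially a counter: only the outermost COMMIT commits anything, and a partial rollback is deliberately decoupled from that counter, which is why two COMMITs are needed.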
SQL 2008R2 Trial [duplicate] Posted: 23 May 2013 02:47 PM PDT This question already has an answer here: I currently have a small db running on SQL Server 2008R2 Express. I'm considering moving to SQL 2008R2 Small Business or Standard to use all the server is capable of. I have a Dell T610 - 2x Quad Core Xeon E5540 2.53GHz - 30GB RAM, 4x146GB 15K SAS + 4x1TB Enterprise SATA - PERC 6/i RAID controller. Both sets are configured for RAID 10, running Windows Server 2008 R2 Standard. My question is: if I load the SQL 2008R2 trial version and do not see the improvement I expect, can I easily move back to Express? Just a note - I cannot go to SQL 2012; the db app is limited to SQL 2008. |
Delete SQL Server Logins On Replicated Server Posted: 23 May 2013 04:11 PM PDT We have a production database which is replicated on another server at a remote location. The local database users (not logins) are also replicated in this database. However, the sql logins are not replicated from production to the remote server. What is the best way to replicate "logins" to the remote server with passwords and all? Here's my approach and I need to know if I'm going the right direction. 1) Need help with a script that will drop all logins on the replicated server. 2) EXEC [production server instance].[master].[dbo].[sp_help_revlogin] from the replicated server to get an accurate production list of logins. 3) Find a way to actually execute the results from sp_help_revlogin that will script out the logins on the replicated server with hash passwords, sid's etc. Any major problems with my approach? If not, I could use some help actually writing the scripts or outline the exact process for 1-3 (made some attempts but have had problems getting the scripts to work). |
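For step 1, a generated-DROP approach is common. A hedged sketch; the filters may need adjusting for your environment, and the output should be reviewed before it is executed:

```sql
-- Generate DROP LOGIN statements for SQL and Windows logins on the replica,
-- excluding certificate-based system logins and 'sa'.
SELECT 'DROP LOGIN ' + QUOTENAME(name) + ';'
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G')     -- SQL logins, Windows logins, Windows groups
  AND name NOT LIKE '##%'         -- internal certificate-based logins
  AND name <> 'sa';
```

For step 3, the output of sp_help_revlogin is itself a script of CREATE LOGIN statements (with password hashes and SIDs), so it can simply be copied into a query window on the replicated server and run there.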
Where should I look for details about 'access denied' errors? Posted: 23 May 2013 06:09 PM PDT This question derives from Deploying a new database on a new server, an msbuild ssdt rookie issue. Since I am getting one of these errors: is there a place in the Event Viewer (Windows 7) where I can look for it? |
script to startup oracle database Posted: 23 May 2013 04:04 PM PDT I found the script below, which helps me start up an Oracle database: It would be a great help if someone could explain this script and how it starts up an Oracle database. |
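A typical minimal startup script looks something like the following; this is a sketch assuming a standard single-instance installation, and the paths and SID are placeholders:

```shell
#!/bin/sh
# Set the environment for the target instance (values are examples).
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=ORCL
export PATH=$ORACLE_HOME/bin:$PATH

# Connect as SYSDBA and start the instance. STARTUP reads the spfile,
# allocates the SGA (nomount), mounts the control files, then opens
# the datafiles and redo logs.
sqlplus -S / as sysdba <<EOF
STARTUP;
EXIT;
EOF
```

Scripts of this shape are usually run as the oracle OS user at boot; Oracle also ships a `dbstart` utility that wraps the same STARTUP sequence for the instances listed in /etc/oratab.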
BLOB storage in rows Posted: 23 May 2013 11:02 AM PDT I have a database that has several BLOB columns scattered across multiple tables. Since I am a developer and am looking at making application code easier to write, I wanted to turn these columns into NUMBER FK columns referencing a table dedicated to BLOBs only (i.e. a PK ID column and a BLOB column). This would allow our ORM library to do lazy initialization of objects much more easily, with less code. What are the performance or data-organization implications of doing this? The database is quite large in size due to these BLOBs. |
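The proposed layout would look roughly like this; a sketch with invented table and column names, not the asker's actual schema:

```sql
-- Dedicated BLOB table: just a surrogate key and the LOB itself.
CREATE TABLE blob_store (
    id   NUMBER PRIMARY KEY,
    data BLOB
);

-- Each former BLOB column becomes a nullable FK into blob_store,
-- which the ORM can leave unresolved until the LOB is actually needed:
ALTER TABLE documents ADD (blob_id NUMBER REFERENCES blob_store (id));
```

One trade-off worth noting: Oracle already stores large LOBs out-of-line from the row, so the main effect of this design is on join cost and ORM convenience rather than on raw row size.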
BULK INSERTS IN ORACLE 11g SCAN Cluster environment Posted: 23 May 2013 10:54 AM PDT A very large amount of data from one table needs to be populated into another table, and this is taking a very, very long time (using SQL Developer on an Oracle 11g SCAN cluster environment). Because of the slowness, I have experienced connection time-outs many times. Things you must know:
Whereas in SQL Server, bulk operations are suggested for this. I'm sure there must be some solution I'm missing; an expert touch may make my work easy. |
where is oracle session timezone set? Posted: 23 May 2013 11:04 AM PDT I have a logging table and a package that inserts into it with this fragment: When this is called from a DBMS_JOB from one package, it inserts Greenwich Mean Time. When it is called from a DBMS_JOB from another package that uses a db_link to another database, it correctly inserts the local time. I think this is caused by the client's timezone being used, but adding this to the job did not resolve it: Of course the best solution is to move to DBMS_SCHEDULER, but until that is done, how can I ensure that the timestamp that is inserted is the local time? |
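Until the move to DBMS_SCHEDULER, two workarounds are commonly suggested; a sketch, where the time zone name and the table/column names are placeholders:

```sql
-- Workaround 1: have the job's PL/SQL set the session time zone itself
-- before inserting, so session-timezone-dependent values use the intended zone:
EXECUTE IMMEDIATE q'[ALTER SESSION SET TIME_ZONE = 'America/New_York']';

-- Workaround 2: sidestep the session setting entirely by converting
-- explicitly at insert time:
INSERT INTO app_log (log_ts)
VALUES (SYSTIMESTAMP AT TIME ZONE 'America/New_York');
```

Workaround 2 is the more robust of the two, since it no longer matters which client or job queue the insert runs from.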
Postgresql function to create table Posted: 23 May 2013 02:15 PM PDT I want to create a function that creates a table with a specific structure, passing part of the table's name as an argument, so that the name of the table is t_ followed by the argument. Similar to this: Then call it like: Is it possible? |
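It is possible with dynamic SQL in PL/pgSQL: DDL cannot take the table name as a parameter directly, but EXECUTE with format() can build it. A sketch; the column list is invented:

```sql
-- Creates a table named t_<suffix> with a fixed structure.
CREATE OR REPLACE FUNCTION create_t(suffix text)
RETURNS void AS $$
BEGIN
    -- %I quotes the identifier safely, preventing SQL injection
    -- through the suffix argument.
    EXECUTE format('CREATE TABLE %I (id serial PRIMARY KEY, payload text)',
                   't_' || suffix);
END;
$$ LANGUAGE plpgsql;

-- Usage: creates a table named t_2013.
SELECT create_t('2013');
```

format() is available from PostgreSQL 9.1 onward; on older versions the same thing is done with string concatenation plus quote_ident().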
MySql is not optimizing the query properly Posted: 23 May 2013 08:38 PM PDT I have a table structure as follows: NOTE: I have a UNIQUE index. When I run this query, MySQL uses the UNIQUE index and the execution time is 0.7 milliseconds, BUT when I run this other query, MySQL does not use the UNIQUE index and the execution time is 76 milliseconds. MySQL: 5.5.27, InnoDB version: 1.1.8. My question is: why is MySQL behaving in such a way? Can someone please help me with this? |
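Without the elided table definition and queries it is hard to say why the plans differ, but when the optimizer misjudges index selectivity, an index hint is the usual blunt instrument. A sketch; the table, index, and column names are placeholders:

```sql
-- Force the optimizer to use the unique index it is ignoring:
SELECT *
FROM my_table FORCE INDEX (uq_my_index)
WHERE col_a = 1 AND col_b = 2;

-- Compare the chosen plans with and without the hint:
EXPLAIN SELECT * FROM my_table WHERE col_a = 1 AND col_b = 2;
```

If the unhinted plan shows a full scan, running ANALYZE TABLE to refresh index statistics is worth trying before resorting to a permanent hint.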
what's the fastest method to export a table through a SQL query? 11g - performance Posted: 23 May 2013 12:07 PM PDT I'm connected to the Oracle server remotely. I need to export a table's data (the whole table) with millions of records, but due to hosting restrictions I can only use SQL (no exp, expdp). Therefore I'm wondering what the SQL query with the best performance is. My table structure is as follows: I was thinking of selecting blocks of records (e.g. 100,000 at a time), concatenating them internally, and after it finishes, "exporting" them. However, this seems to take too long as well; based on my estimation it would take about 20 hours. |
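When only SQL*Plus is available, a spooled SELECT with the client-side formatting overhead turned off is usually the fastest pure-SQL option. A sketch; the file name, columns, and parallel degree are placeholders:

```sql
-- SQL*Plus settings that avoid most client-side formatting overhead:
SET TERMOUT OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 32767
SET TRIMSPOOL ON ARRAYSIZE 5000

SPOOL /tmp/big_table.csv
SELECT /*+ PARALLEL(t 4) */ col1 || ',' || col2 || ',' || col3
FROM   big_table t;
SPOOL OFF
```

ARRAYSIZE is the biggest lever here: it controls how many rows are fetched per round trip, which matters a great deal over a remote connection.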
SQL agent job step's connection managers Posted: 23 May 2013 12:11 PM PDT Is there any way with T-SQL to list the connection managers that are used in an SQL Agent Job step? |
Optimize UNION query in MYSQL Posted: 23 May 2013 03:09 PM PDT I have a problem with a UNION query in MySQL. We have 10 million players on our website and we would like to select players with a multi-criteria system. For example: selecting US people, men, > 35 y.o. We are using "vertical partitioning": one table per criterion. For example: We would like to do this kind of query: Do you have any idea how to optimize that? Thanks! François ----- More details EXPLAIN of the query: SHOW CREATE TABLE |
How can the InnoDB ibdata1 file grow by 5X even with innodb_file_per_table set? Posted: 23 May 2013 05:09 PM PDT I have innodb_file_per_table set, and just today my ibdata1 file jumped from 59M to 323M after I made several changes to an 800M table to reduce it to about 600M. That particular table's .ibd file was reduced, but the server's ibdata1 file went crazy. Any ideas? |
Is there slowdown inserting into an InnoDB table that has no index set? Posted: 23 May 2013 01:09 PM PDT I have an old application with lots of InnoDB tables, that have no indexes at all, not even a primary ID or such. Those tables only contain a few thousand rows. Would it be faster to |
Cannot Utilize Maximum CPU and Memory Usage for MySQL Posted: 23 May 2013 09:09 PM PDT Good day. I know this may be a duplicate of other questions; however, I have applied all the suggestions in many of the threads, but I remain with the same problem. I have a single stored procedure working with max I am sitting with a my.ini that looks as follows: Is it the nature of the procedure being called that is causing MySQL to under-utilize the hardware, or is it my configuration? |
Loading XML documents to Oracle 11g DB with control file Posted: 23 May 2013 11:09 AM PDT I am using the Oracle 11g XML database and trying to load XML documents into this DB with a control file and the I want to use the Oracle function Here is the date entry in the XML file: And here is the entire code of the control file: I believe that I can execute the above control file with the The UPDATE: I successfully registered the schema, which contains the definition for the date string, along with 100 other schemas, with a script. Since this script is very large, I am posting only 2 registration portions of it: The 2nd registration above is the last in the script, and it creates the table STXP, into which I am trying to load about 800 XML files. Each XML file has a root element called stxp. This is the relevant definition of the date string: And this is how I am using the above definition: When I make the above element optional (for testing purposes) and remove the date string entry (mentioned near the top of this question) from my XML file, the XML file is loaded successfully into the Oracle XML database. When I put this entry back into the XML file (because it is required), Oracle rejects it. Because I let Oracle take care of populating the STXP table with data from the XML files, I am not sure whether I can set a trigger to pre-process the date string from the input XML file before saving it in the database. I think there is a way to do it in the control file. |
Column partially determining accepted values of another column, should be somehow normalized? Posted: 23 May 2013 08:11 PM PDT I have a table that describes a set of properties (yes, it's metadata; in this case there's a valid reason to have it in the database); among other things, I record the type of the data that can be assigned to each property (Type) and a default value. The valid values for the default value are thus restricted by the Type column, but the default value is genuinely an attribute of the table; it is not uniquely determined by the Type column. I'm not sure, though: should this be somehow normalized, or is it right as it is? Edit: here's an approximate description of the current structure of the relevant part of the database, as requested. Don't mind the Values table; that's a separate issue. Just consider that ValueType restricts the set of permitted DefaultValue values. Also, consider this only as an example; I'm interested in the problem in general. The problem, namely, is columns that limit the set of valid values of another column but don't determine its exact value, and thus, as far as I understand, don't constitute multivalued dependencies (but, as an aside, it would be useful to have an integrity constraint that enforces the limit; I'm not sure that this issue can be separated from the former). |
Copying my Oracle 10g Express database to another PC Posted: 23 May 2013 12:49 PM PDT I have Oracle 10g Express. How can I make a copy of my database and application? I don't want to make a backup; I want to move my DB and application to another PC. |
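With 10g Express, the practical route is still export/import, even though it can feel like a backup: the exp/imp utilities ship with XE and move both schema objects and data. A sketch; the schema name, passwords, and file names are placeholders:

```shell
# On the source PC: export the application schema to a dump file.
exp myapp/mypassword OWNER=myapp FILE=myapp.dmp LOG=exp.log

# Copy myapp.dmp to the new PC (USB drive, network share, etc.), then,
# after installing Oracle XE there and creating the myapp user:
imp system/syspassword FROMUSER=myapp TOUSER=myapp FILE=myapp.dmp LOG=imp.log
```

The application files themselves (outside the database) are just copied over separately; only the schema and its data need exp/imp.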
Normalization/normal forms - May a field describe another field? Posted: 23 May 2013 06:32 PM PDT Like this:
I wonder if it violates this rule (because I don't fully understand the rule): Every non-prime attribute of R is non-transitively dependent (i.e. directly dependent) on every superkey of R. |
MySQL data too long error Posted: 23 May 2013 08:09 PM PDT One of the columns in my table was initially created as a varchar(1000). Now there is a need to increase the size of that column to hold around 5000 characters. I used the I created another dummy table with a column size of varchar(5000) and that works just fine. The engine used is InnoDB and the default charset is UTF8. |
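For reference, widening the column in place would look like this; a sketch with placeholder table and column names. One caveat worth knowing: with utf8, a VARCHAR(5000) column exceeds InnoDB's 767-byte index-prefix limit of that era, so such a column cannot be fully indexed:

```sql
-- MODIFY repeats the full column definition, including the charset:
ALTER TABLE my_table
    MODIFY my_col VARCHAR(5000) CHARACTER SET utf8;
```

If existing rows or incoming data still trigger "Data too long", check whether a strict sql_mode (e.g. STRICT_ALL_TABLES) is rejecting values instead of truncating them.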
consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest? Posted: 23 May 2013 02:09 PM PDT MySQL 5.5.29, InnoDB - 128GB RAM - 32 cores - RAID 10 SSD. Our server, a dedicated KVM guest on 'bare metal', is hosting our heavy read-write DB server. Everything is file-per-table. innodb_buffer_pool is 96GB, with 2x 1GB log_file_size and about 20 minutes of writes to fill up those logs at peak time. How bad a situation would it be if O_DIRECT (currently running on the default) was enabled during a high workload without a battery-backed write cache, and a total crash were to occur on the OS or parent host, or the power was cut? Does a battery-backed write cache make a difference if the server is a VM guest of the parent anyway? |
MySQL Dump configuration file Posted: 23 May 2013 04:09 PM PDT I modified the I found that if I give the option |
How can I set a default session sql_mode for a given user? Posted: 23 May 2013 12:09 PM PDT I want to have a default session sql_mode that is different from the global sql_mode. I can set it manually, but I'd like to do it automatically. I thought I could add it to Yet, when I connect and check my session sql_mode, the session has still inherited the global sql_mode. I'm using MySQL 5.5.16. |
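A common way to approximate a per-connection default is the init_connect server variable, with one well-known caveat: it is not executed for users with the SUPER privilege, which silently explains many "the session still inherits the global mode" reports. A sketch; the chosen mode is only an example:

```sql
-- Run once as an administrator; applies to subsequent non-SUPER connections:
SET GLOBAL init_connect = "SET SESSION sql_mode = 'STRICT_ALL_TABLES'";

-- Verify from a fresh connection made by a non-SUPER user:
SELECT @@SESSION.sql_mode;
```

If the connecting account has SUPER, init_connect is skipped entirely, so test with an ordinary application account rather than root.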
TRUNCATE TABLE statement sometimes hangs Posted: 23 May 2013 11:09 AM PDT Why does the I am migrating from MySQL to MariaDB. This problem doesn't happen with MySQL, only with MariaDB. The hanging statement is simply: What can cause this to happen, and how could I fix it? One more observation: if the table has only a little data, maybe one or two rows, the TRUNCATE query works successfully; but if the table has a lot of data, the query hangs. |
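While the statement is hanging, a second session can usually show what it is waiting on; these are standard diagnostics, with no MariaDB-specific assumptions:

```sql
-- What state is the TRUNCATE thread in? A state like
-- "Waiting for table metadata lock" would point at another
-- open transaction that still references the table.
SHOW FULL PROCESSLIST;

-- Lock and semaphore detail from the storage engine:
SHOW ENGINE INNODB STATUS\G
```

The observation that only large tables hang suggests the time is going into InnoDB's work of dropping and recreating the tablespace (TRUNCATE is DDL here), so buffer pool size and the amount of dirty data can matter too.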
Users cannot view tables in non-default schema in SSMS Posted: 23 May 2013 07:09 PM PDT I'm having an issue setting the I've tried granting permissions to view definitions: That didn't work. I tried setting the table-level permission: That also didn't work. Then I tried just a blanket grant: And that did work; they can now see My goal here is to allow the user to view all tables within a given schema. How do I accomplish that? If I should be able to do this by default, what permissions should I be looking at to find why I can't? |
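The schema-scoped grants that usually achieve "see every table in one schema" are as follows; a sketch, where the schema and user names are placeholders:

```sql
-- Let the user see object metadata for everything in the schema,
-- so the tables appear in SSMS's Object Explorer:
GRANT VIEW DEFINITION ON SCHEMA::erp TO [SomeUser];

-- Optionally also let them read the data in those tables:
GRANT SELECT ON SCHEMA::erp TO [SomeUser];
```

A schema-scoped grant covers objects added to the schema later as well, which is why it is usually preferred over per-table grants for this goal.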
How do I automatically deliver the SQL Server built in reports? Posted: 23 May 2013 12:50 PM PDT When you right click on a number of items in the object explorer, it gives you the option for a reports menu. Various reports may exist depending on which type of item is selected. Yet there does not appear to be any built-in functionality for automated delivery of these reports. Is it hidden away somewhere? My real question: is there any way to set some of these up to be automatically delivered through email? The reports I am targeting are the "Job Steps Execution History" and "Top Jobs" reports for SQL Server Agent. I am using SQL Server 2005/2008. |
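The SSMS standard reports themselves cannot be scheduled, but equivalent data can be mailed from an Agent job with Database Mail. A sketch; the profile name, recipient, and query are placeholders, and the query only approximates the "Job Steps Execution History" report:

```sql
-- Inside a scheduled SQL Server Agent job step:
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA Mail Profile',
    @recipients   = 'dba@example.com',
    @subject      = 'Job step execution history',
    @query        = N'SELECT j.name, h.step_name, h.run_date, h.run_time,
                             h.run_status
                      FROM msdb.dbo.sysjobhistory h
                      JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
                      ORDER BY h.run_date DESC;',
    @attach_query_result_as_file = 1;
```

Database Mail must be configured (with the named profile) before sp_send_dbmail will work; the same pattern covers any report whose underlying msdb query you can reconstruct.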