Thursday, May 23, 2013

[how to] Query without having to specify table's schema

Query without having to specify table's schema

Posted: 23 May 2013 08:04 PM PDT

I imported a bunch of tables from SQL Server 2000 into my 2008 database. All the imported tables are prefixed with my username, e.g. erpadmin.tablename.

In the table properties it lists 'erpadmin' as the db schema. When I write a query I now have to include 'erpadmin.' in front of all the table names, which is confusing.

Current result:

select * from erpadmin.tablename

Desired result:

select * from tablename
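Two approaches that may apply here, sketched under the assumption that your login maps to a database user you are allowed to alter ([your_user] below is a placeholder): either make erpadmin the user's default schema so unqualified names resolve there, or transfer the tables into dbo.

-- Option 1: make erpadmin the default schema for your database user
ALTER USER [your_user] WITH DEFAULT_SCHEMA = erpadmin;

-- Option 2: move a table into the dbo schema (repeat per table)
ALTER SCHEMA dbo TRANSFER erpadmin.tablename;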

Record versioning and promotion

Posted: 23 May 2013 06:08 PM PDT

Let's say we have this hierarchy:

-World
--USA
---WA
----Seattle
-----Downtown
------1st Ave
-------945 1st ave
------3rd Ave
-----South
----Bellevue
---TX
---MI
--Canada
---BC

Now, this will not reflect real life completely, but:

  1. At the World location we have some global settings. Let's say: a list of Races, a list of Languages, etc. All locations underneath it can access this data.

  2. The Country level holds data about laws, currency, etc.

  3. Each city can modify laws. After a city modifies a law, it is available to the city itself and to the regions within the city. After a 3-step approval process, it can become a country-level law, and all children will inherit the new version of the law. Each city can still modify the new version of the law with its own changes.

  4. Every city has sandboxes. This means that different organizations within a city can test out editing different versions of laws without affecting the city's master revision.

Currently we solve this issue like this:

We have a table Location with a HierarchyId column.
Then we have a table Law that has columns StateType, Location_Id, etc.
StateType can be: Master (available to all within the country), Edited (edited for a specific location), or Approved (approved at a specific location).
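A rough sketch of that structure, with names and data types that are assumptions rather than the real definitions:

-- Hypothetical sketch of the current design (columns abbreviated)
CREATE TABLE Location (
    Id   int IDENTITY PRIMARY KEY,
    Node hierarchyid NOT NULL,      -- position in the World/Country/City tree
    Name nvarchar(100) NOT NULL
);

CREATE TABLE Law (
    Id          int IDENTITY PRIMARY KEY,
    Location_Id int NOT NULL REFERENCES Location (Id),
    StateType   varchar(20) NOT NULL  -- 'Master', 'Edited' or 'Approved'
);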

Because of this we have to use table-valued functions in our application to filter the specific records that are relevant to the current location. (Some laws come from the country level, some from the city level, some from the organization level.) This makes the database much slower.

We cannot use native unique constraints on our fields, because all fields must be unique only within a branch, so we have to use custom check constraints to keep data clean.

Also, inserting and editing records is much slower, since we have to promote some records to the country level and then apply data transformations for other countries.

Currently we have around 150 tables, and we use a lot of joins.

I'm not sure what's the best way to model complex hierarchical data with versioning. (I'm thinking we need an enhanced version of source control...)

Partial rollback doesn't decrement trancount

Posted: 23 May 2013 06:31 PM PDT

Suppose I have an open SQL Server session, and do the following:

begin tran
insert into People (Id) values (1)
select @@TRANCOUNT -- Prints 1
save transaction tt
begin tran
select @@TRANCOUNT -- Prints 2
insert into People (Id) values (2)

OK, so now I run:

rollback tran tt
select @@TRANCOUNT -- Prints 2!

My question is maybe obvious:

Why doesn't the partial rollback decrement the @@TRANCOUNT that was increased by the inner transaction?

To commit the changes I have to issue two COMMIT commands. IMHO that doesn't seem natural.
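For comparison, a minimal sketch of the surrounding behaviour: rolling back to a savepoint never changes @@TRANCOUNT, while a full ROLLBACK (no savepoint name) resets it to zero and each COMMIT decrements it by one.

begin tran
begin tran
select @@TRANCOUNT -- 2
rollback tran      -- full rollback, no savepoint name
select @@TRANCOUNT -- 0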

SQL 2008R2 Trial [duplicate]

Posted: 23 May 2013 02:47 PM PDT

I currently have a small db running on SQL Server 2008R2 Express. I'm considering moving to SQL 2008R2 Small Business or Standard to use all the server is capable of. I have a Dell T610 - 2x Quad Core Xeon E5540 2.53GHz - 30GB RAM, 4x146GB 15K SAS + 4x1TB Enterprise SATA - Perc 6/i RAID controller. Both sets are configured for RAID 10. Running Windows Server 2008 R2 Standard.

My question is, if I load the SQL 2008R2 trial version and do not see the improvement I expect, can I easily move back to Express?

Just a note - I cannot go to SQL 2012; the db app is limited to SQL 2008.

Delete SQL Server Logins On Replicated Server

Posted: 23 May 2013 04:11 PM PDT

We have a production database which is replicated on another server at a remote location. The local database users (not logins) are also replicated in this database. However, the SQL logins are not replicated from production to the remote server.

What is the best way to replicate "logins" to the remote server with passwords and all?

Here's my approach and I need to know if I'm going the right direction.

1) Need help with a script that will drop all logins on the replicated server.

2) EXEC [production server instance].[master].[dbo].[sp_help_revlogin] from the replicated server to get an accurate production list of logins.

3) Find a way to actually execute the results from sp_help_revlogin that will script out the logins on the replicated server with hashed passwords, SIDs, etc.

Any major problems with my approach? If not, I could use some help actually writing the scripts or outline the exact process for 1-3 (made some attempts but have had problems getting the scripts to work).
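For step 1, a hedged sketch that generates DROP LOGIN statements for review before running them; the filter is an assumption, so adjust it to exclude replication, service, and system accounts in your environment:

SELECT 'DROP LOGIN ' + QUOTENAME(name) + ';'
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G')   -- SQL logins, Windows logins and groups
  AND name NOT LIKE '##%'       -- skip internal certificate-based logins
  AND name <> 'sa';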

Where should I look for details about 'access denied' errors?

Posted: 23 May 2013 06:09 PM PDT

This question derives from Deploying a new database on a new server, an msbuild ssdt rookie issue.

Since I have a permission denied error, I expect to find out which user is failing to get access so I can grant it the permission.

Is there a place in the Event Viewer (Windows 7) where I can look for it?

script to startup oracle database

Posted: 23 May 2013 04:04 PM PDT

I found the script below, which helps me start up an Oracle database:

set serveroutput on;
WHENEVER SQLERROR EXIT SQL.SQLCODE
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "MY_DB" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '&1/fast_recovery_area/MY_DB/onlinelog/<file1>.log' SIZE 50M BLOCKSIZE 512,
  GROUP 2 '&1/fast_recovery_area/MY_DB/onlinelog/<file2>.log' SIZE 50M BLOCKSIZE 512
DATAFILE
  '&1/oradata/MY_DB/system.dbf',
  '&1/oradata/MY_DB/sysaux.dbf',
  '&1/oradata/MY_DB/undotbs.dbf',
  '&1/oradata/MY_DB/users.dbf'
CHARACTER SET AL32UTF8;
alter database open;
exit;

It would be a great help if someone could help me understand this script and how it can set up an Oracle database.

BLOB storage in rows

Posted: 23 May 2013 11:02 AM PDT

I have a database that has several BLOB columns scattered across multiple tables. Since I am a developer and am looking at making application code writing easier, I wanted to turn these columns into NUMBER FK columns referencing a table dedicated to BLOBs only (i.e. a PK ID column and a BLOB column). This would allow our ORM library to do lazy initialization of objects much more easily with less code.
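A minimal sketch of the layout being considered, assuming Oracle and purely illustrative table and column names:

-- Dedicated BLOB table
CREATE TABLE blob_store (
  id      NUMBER PRIMARY KEY,
  content BLOB
);

-- A referencing table keeps only a numeric FK instead of the BLOB itself
CREATE TABLE document (
  id            NUMBER PRIMARY KEY,
  title         VARCHAR2(200),
  blob_store_id NUMBER REFERENCES blob_store (id)
);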

What are the performance or data organization implications to doing this? The database is quite large in size due to these BLOBs.

BULK INSERTS IN ORACLE 11g SCAN Cluster environment

Posted: 23 May 2013 10:54 AM PDT

A very large amount of data needs to be populated from one table into another table, which is taking a very, very long time [using SQL Developer on an Oracle 11g SAN cluster environment]. Because of its slowness, I have experienced connection time-outs many times.

Things you must know:

  1. Data is being fetched from a non-transactional table [Oracle 11g Cluster APP SERVER].
  2. Data is being populated into a transactional table [Oracle 11g SAN Cluster].
  3. Data is around 10 - 15 million records.
  4. Data is being fetched using a database link.

Whereas in SQL Server, bulk operations are the suggested approach.

I'm sure there must be some solution that I'm missing; any expert touch may make my work easier.
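One commonly suggested approach is a direct-path (APPEND) insert over the database link, run as a single server-side statement instead of fetching rows through SQL Developer; this is only a sketch, and the table and link names are placeholders:

-- Run on the target (transactional) database
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND */ INTO target_table
SELECT *
FROM   source_table@remote_db;

COMMIT;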

where is oracle session timezone set?

Posted: 23 May 2013 11:04 AM PDT

I have a logging table

CREATE TABLE EMAIL_LOGGING
(
  ID               NUMBER(9)     NOT NULL,
  SEND_DATE        TIMESTAMP(6)  DEFAULT LOCALTIMESTAMP NOT NULL,
  SEND_TO_USER_ID  NUMBER(9)     NOT NULL
);

and a package that inserts into it with this fragment:

INSERT INTO EMAIL_LOGGING ("ID", SEND_DATE, SEND_TO_USER_ID)
VALUES (NULL, LOCALTIMESTAMP, send_to_user_in);

When this is called from a DBMS_JOB in one package, it inserts Greenwich Mean Time. When it is called from a DBMS_JOB in another package that uses a db_link to another database, it correctly inserts the local time.

I think this is caused by the client's time zone being used, but adding this to the job did not resolve it:

execute immediate 'alter session set time_zone=local';  

Of course the best solution is to move to DBMS_SCHEDULER, but until that is done, how can I ensure that the inserted timestamp is the local time?

select * from v$version where banner like 'Oracle%';
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production

SELECT DBTIMEZONE FROM DUAL;
+00:00

SELECT SESSIONTIMEZONE from dual;
-04:00

Postgresql function to create table

Posted: 23 May 2013 02:15 PM PDT

I want to create a function that creates a table with a specific structure, passing part of the table name as an argument so that the name of the table is t_<name>. Similar to this:

CREATE OR REPLACE FUNCTION create_table_type1(t_name VARCHAR(30)) RETURNS VOID AS $$
BEGIN
    EXECUTE 'CREATE TABLE IF NOT EXISTS t_' || t_name || '
    (
    id SERIAL,
    customerid INT,
    daterecorded DATE,
    value DOUBLE PRECISION,
    PRIMARY KEY (id)
    )';
END;
$$ LANGUAGE plpgsql;

Then call it like:

SELECT create_table_type1('one');  

Is it possible?

MySQL is not optimizing the query properly

Posted: 23 May 2013 08:38 PM PDT

I have a table structure as follows :

CREATE TABLE `sale_product_inventories` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `sale_id` int(11) NOT NULL,
  `product_id` int(11) NOT NULL,
  `size` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
  `tier_number` int(11) NOT NULL DEFAULT '1',
  `sale_product_pool_id` int(11) DEFAULT NULL,
  `inventory` int(11) NOT NULL,
  `fulfillment_center_id` int(11) DEFAULT NULL,
  `total_product_units_sold` int(11) DEFAULT '0',
  `in_cart_units` int(11) DEFAULT '0',
  `arrival_from_date` date DEFAULT NULL,
  `arrival_to_date` date DEFAULT NULL,
  `arrival_custom_from` tinyint(4) DEFAULT NULL,
  `arrival_custom_to` tinyint(4) DEFAULT NULL,
  `arrival_custom_unit` enum('days','weeks','months') COLLATE utf8_unicode_ci DEFAULT 'days',
  `size_display_order` tinyint(4) NOT NULL DEFAULT '0',
  `last_updated_by` int(11) DEFAULT '0',
  `created_by` int(11) DEFAULT '0',
  `status` enum('active','inactive') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'active',
  `created_at` datetime DEFAULT NULL,
  `updated_at` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `UNIQUE` (`sale_id`,`product_id`,`tier_number`,`size`,`sale_product_pool_id`)
) ENGINE=InnoDB AUTO_INCREMENT=92872 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

NOTE: I have a UNIQUE index on (sale_id, product_id, tier_number, size, sale_product_pool_id).

When I run this query :

select *
from sale_product_inventories
where sale_id in (502, 504)
  and (sale_id, product_id) in ((502,2), (502,1), (502,3), (502,4), (504,2), (504,3))

Query plan for the query above: MySQL uses the UNIQUE index, and the execution time is 0.7 milliseconds.

BUT

when I run this query

select *
from sale_product_inventories
where (sale_id, product_id) in ((502,2), (502,1), (502,3), (502,4), (504,2), (504,3))

Query plan for the second query: MySQL does not use the UNIQUE index, and the execution time is 76 milliseconds.

MySQL: 5.5.27, InnoDB version: 1.1.8

My question is: why is MySQL behaving this way? Can someone please help me with this?
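As a test workaround (not necessarily the long-term fix), an index hint can force the optimizer onto the unique key; a sketch:

select *
from sale_product_inventories FORCE INDEX (`UNIQUE`)
where (sale_id, product_id) in ((502,2), (502,1), (502,3), (502,4), (504,2), (504,3));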

what's the fastest method to export a table through sql query ? 11g - performance

Posted: 23 May 2013 12:07 PM PDT

I'm connected to the Oracle server remotely. I need to export a table's data (the whole table) with millions of records, but due to the hosting restrictions I can only use SQL (no exp, expdp). Therefore I'm wondering which SQL query gives the best performance. My table structure is as follows:

CREATE TABLE "AUTO"."AUTO"
(
  "AUTOJ"   NUMBER(16,0),
  "AUTOK"   NUMBER(5,0),
  "AUTOL"   NUMBER(38,0),
  "AUTOM"   NUMBER(4,0),
  "AUTON"   NUMBER(4,0),
  "AUTOZ"   NUMBER(8,0),
  "AUTOAA"  NUMBER(32,0),
  "AUTOBB"  NUMBER(38,0),
  "AUTOBC"  NUMBER(38,0),
  "AUTOBD"  NUMBER(38,0),
  "AUTOBW"  DATE,
  "V"       NUMBER(32,0),
  "AUTOT"   VARCHAR2(128),
  "AUTOU"   NUMBER(10,0),
  "MK"      NUMBER(38,0),
  "ID"      NUMBER(38,0) NOT NULL ENABLE,
  "AUTOA"   VARCHAR2(64) NOT NULL ENABLE,
  "AUTOB"   NUMBER(38,0) NOT NULL ENABLE,
  "AUTOC"   VARCHAR2(68),
  "AUTOD"   VARCHAR2(64),
  "AUTOE"   DATE NOT NULL ENABLE,
  "AUTOF"   DATE,
  "AUTOG"   NUMBER(16,0),
  "AUTOH"   VARCHAR2(128) NOT NULL ENABLE,
  "AUTOI"   NUMBER(10,0),
  "AUTOW"   NUMBER(10,0),
  "AUTOX"   NUMBER(15,2),
  "AUTOY"   NUMBER(15,2),
  "AUTOMK"  DATE,
  "AUTOPO"  VARCHAR2(128),
  "AUTOCV"  NUMBER(10,0),
  "AUTOON"  NUMBER(10,0),
  "AUTOXA"  NUMBER(1,0),
  "AUTOBY"  NUMBER(8,0),
  "AUTOCC"  NUMBER(38,0),
  "AUTODA"  DATE,
  "AUTOO"   NUMBER(3,0),
  "AUTOP"   VARCHAR2(1),
  "AUTOR"   NUMBER(3,0),
  "AUTOS"   NUMBER(3,0),
  "AUTOIWN" NUMBER(10,0)
)
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 104857600 NEXT 104857600 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT)
TABLESPACE "AUTODATA01"

I was thinking about selecting blocks of records (e.g. 100,000 at a time), concatenating them internally, and exporting them after it finishes. However, this seems to take too long as well; based on my estimation it would take about 20 hours.

OS: Solaris
DB: Oracle 11g
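If plain SQL is the only option, one common route is spooling from SQL*Plus with minimal formatting overhead; a sketch in which the file path, delimiter, and the short column list are illustrative rather than the full table:

SET TERMOUT OFF FEEDBACK OFF HEADING OFF PAGESIZE 0 LINESIZE 32767 TRIMSPOOL ON
SPOOL /tmp/auto_export.csv

SELECT ID || ',' || AUTOA || ',' || TO_CHAR(AUTOE, 'YYYY-MM-DD')
FROM   "AUTO"."AUTO";

SPOOL OFF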

SQL agent job step's connection managers

Posted: 23 May 2013 12:11 PM PDT

Is there any way with T-SQL to list the connection managers that are used in an SQL Agent Job step?

Optimize UNION query in MYSQL

Posted: 23 May 2013 03:09 PM PDT

I have a problem with a UNION query in MySQL. We have 10 million players on our website, and we would like to select players with a multi-criteria system. For example, selecting US people, men, > 35 years old.

We are using "vertical partitioning": one table per criterion. For example:

* user_country
  - id_user
  - id_country

We would like to do this kind of query :

SELECT id_inscri FROM userdata_langue
WHERE id_langue='43'
UNION
SELECT id_inscri FROM userdata_sexe
WHERE sexe='2'
UNION
SELECT id_inscri FROM userdata_nb_jour
WHERE nb_jour>='31'
UNION
SELECT id_inscri FROM userdata_last
WHERE last<='2013-04-07'
  AND last>='2013-04-03';

Do you have any idea on how to optimize that ?

Thanks !

François

----- More details: EXPLAIN of the query:

id    select_type   table             type   possible_keys  key        key_len  ref    rows     Extra
1     PRIMARY       userdata_langue   ref    id_langue      id_langue  1        const  398846   Using index
2     UNION         userdata_sexe     ref    sexe           sexe       1        const  1667137  Using index
3     UNION         userdata_nb_jour  range  nb_jour        nb_jour    2        NULL   5830     Using where; Using index
4     UNION         userdata_last     range  last           last       3        NULL   371614   Using where; Using index
NULL  UNION RESULT  <union1,2,3,4>    ALL    NULL           NULL       NULL     NULL   NULL

SHOW CREATE TABLE

Table: userdata_langue

CREATE TABLE `userdata_langue` (
  `id_inscri` bigint(20) NOT NULL,
  `id_langue` tinyint(3) unsigned NOT NULL,
  PRIMARY KEY (`id_inscri`),
  KEY `id_langue` (`id_langue`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
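If the intent is that a player must satisfy all of the criteria at once (country, sex, age, and so on), a join-based form is often suggested instead of UNION, since UNION returns players matching any single criterion; a sketch, assuming id_inscri identifies the player in every criterion table:

SELECT l.id_inscri
FROM   userdata_langue  l
JOIN   userdata_sexe    s ON s.id_inscri = l.id_inscri AND s.sexe = '2'
JOIN   userdata_nb_jour n ON n.id_inscri = l.id_inscri AND n.nb_jour >= 31
JOIN   userdata_last    d ON d.id_inscri = l.id_inscri
                         AND d.last BETWEEN '2013-04-03' AND '2013-04-07'
WHERE  l.id_langue = '43';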

How can the InnoDB ibdata1 file grow by 5X even with innodb_file_per_table set?

Posted: 23 May 2013 05:09 PM PDT

I have innodb_file_per_table set, and just today my ibdata1 file jumped from 59M to 323M after I made several changes to an 800M table to reduce it to about 600M. That particular table's .ibd file was reduced, but the server's ibdata1 file went crazy. Any ideas?

Is there a slowdown when inserting into an InnoDB table that has no index set?

Posted: 23 May 2013 01:09 PM PDT

I have an old application with lots of InnoDB tables that have no indexes at all, not even a primary key or such.

Those tables only contain a few thousand rows.

Would it be faster to INSERT data into these tables if I set a primary index (which I don't otherwise need)?

Cannot Utilize Maximum CPU and Memory Usage for MySQL

Posted: 23 May 2013 09:09 PM PDT

Good day.

I know this may be a duplicate of other questions; however, I have applied all the suggestions in many of the threads, but I still have the same problem.

I have a single stored procedure working with at most 3 tables. When I run the procedure, only 30% of my CPU is used and about 25% of RAM.

I have a CPU with 4 cores and 16GB of RAM.

my.ini looks as follows:

[client]
port = 3306
socket = /tmp/mysql.sock

[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-external-locking
key_buffer_size = 512M
max_allowed_packet = 32M
table_open_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 128M
thread_cache_size = 16
query_cache_size = 32M
thread_concurrency = 0

log-bin=mysql-bin
binlog_format=mixed
server-id = 1

innodb_buffer_pool_size = 12G
innodb_log_buffer_size = 256M
innodb_flush_log_at_trx_commit = 2
innodb_read_io_threads = 64
innodb_write_io_threads = 64

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash

[myisamchk]
key_buffer_size = 128M
sort_buffer_size = 128M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout

Is it the nature of the procedure being called that is causing MySQL to under-utilize the hardware, or is it my configuration?

I was running XAMPP but then realised it was 32-bit, so I switched to the 64-bit version of WAMP. I use a 32-bit MySQL Workbench to run queries.

I am using the InnoDB engine.

Using MySQL Ver 14.14 Distrib 5.5.24 Win64 (x86).

Loading XML documents to Oracle 11g DB with control file

Posted: 23 May 2013 11:09 AM PDT

I am using the Oracle 11g XML database and trying to load XML documents into this DB with a control file and the sqlldr utility. All these XML files have an element that contains a date string with a time stamp (and the letter T in the middle of it). Oracle rejects this date string because of the T in it, and thus the XML file is not loaded into the DB.

I want to use the Oracle function TO_TIMESTAMP_TZ on the date string during the data load, but I do not know how to do it. That's where I need help. If there is any other way to import the XML (with date string with timestamp), I will try that also.

Here is the date entry in XML file:

<ns3:EntryDateTime cls="U">2013-04-20T21:02:52.468-04:00</ns3:EntryDateTime>  

And here is the entire control file:

load data
infile 'filelist.dat'
append into table STXP xmltype(XMLDATA)
(
  filename filler char(120),
  XMLDATA lobfile(filename) terminated by eof
)

I believe I can execute the above control file with the sqlldr utility from the SQL*Plus command line as well, but I am not sure about this option. If this is possible, I guess I can ALTER SESSION (to somehow format the date string) on the command line before executing the control file.

The filelist.dat mentioned above contains entries for the input XML files, with one line listing one XML file. The above date entry is required in each XML file. Each XML file has about 50 different elements, some required and some optional. I would greatly appreciate your help.

UPDATE: I successfully registered the schema that contains the definition for the date string, along with 100 other schemas, using a script. Since this script is very large, I am posting only 2 registration portions of it:

DECLARE
  SCHEMAURL VARCHAR2( 100 );
  SCHEMADOC VARCHAR2( 100 );
BEGIN
  SCHEMAURL := 'http://www.some.org/stxp/DataTypes.xsd';
  SCHEMADOC := 'DataTypes.xsd';
  DBMS_XMLSCHEMA.registerSchema(
      SCHEMAURL,
      BFILENAME( 'XSD_DIR', SCHEMADOC ),
      LOCAL => TRUE,      -- local
      GENTYPES => TRUE,   -- generate object types
      GENBEAN => FALSE,   -- no java beans
      GENTABLES => TRUE,  -- generate object tables
      OWNER => USER );

  SCHEMAURL := 'http://www.some.org/stxp/STXP.xsd';
  SCHEMADOC := 'STXP.xsd';
  DBMS_XMLSCHEMA.registerSchema(
      SCHEMAURL,
      BFILENAME( 'XSD_DIR', SCHEMADOC ),
      LOCAL => TRUE,      -- local
      GENTYPES => TRUE,   -- generate object types
      GENBEAN => FALSE,   -- no java beans
      GENTABLES => TRUE,  -- generate object tables
      OWNER => USER );
END;
/

The 2nd registration above is the last in the script, and this creates the table STXP, in which I am trying to load about 800 XML files. Each XML file has a root element called stxp.

This is the relevant definition of date string:

<xsd:simpleType name="DT" xdb:SQLType="TIMESTAMP WITH TIME ZONE">
    <xsd:restriction base="xsd:dateTime"/>
</xsd:simpleType>

And this is how I am using the above definition:

<element name="EntryDateTime" type="oth:DT"/>  

When I make the above element optional (for testing purpose) and remove the date string entry (mentioned near the top of this question) from my XML file, the XML file is loaded successfully to Oracle XML database. When I put this entry back to XML file (because it is required), Oracle rejects it.

Because I let Oracle take care of populating the STXP table with data from the XML files, I am not sure if I can set a trigger to pre-process the date string from the input XML file before saving it in the database. I think there is a way to do it in the control file.

Column partially determining accepted values of another column - should it somehow be normalized?

Posted: 23 May 2013 08:11 PM PDT

I have a table that describes a set of properties (yes, it's metadata; in this case there's a valid reason to have it in the database); among other things, it records the type of the data that can be assigned to each property (Type) and a default value. The valid values for the default value are thus restricted by the Type column, but the default value is genuinely an attribute of the table; it is not uniquely determined by the Type column.

I'm not sure though, should this be somehow normalized or is it right as it is?

Edit: here's an approximate description of the current structure of the relevant part of the database, as requested. Don't mind the Values table; that's a separate issue. Just consider that ValueType restricts the set of permitted DefaultValue values.

Also, consider this only as an example; I'm interested in the problem in general. The problem is, namely, columns that limit the set of valid values of another column but that don't determine its exact value, and thus, as far as I understand, don't constitute multivalued dependencies. (As an aside, it would be useful to have an integrity constraint that enforces the limit; I'm not sure that this issue can be separated from the former.)

Properties (
  ID int PRIMARY KEY
  Name varchar(100)
  ValueType int REFERENCES ValueTypes(ID)
  DefaultValue int REFERENCES Values(ID)
)

ValueTypes (
  ID int PRIMARY KEY
  Name varchar(100)
  ...
)

Values (
  ID int PRIMARY KEY
  ...
)

Copying my Oracle 10g Express database to another PC

Posted: 23 May 2013 12:49 PM PDT

I have Oracle 10g Express. How can I make a copy of my database and application?

I don't want to make a backup; I want to move my DB and application to another PC.

Normalization/normal forms - May a field describe another field?

Posted: 23 May 2013 06:32 PM PDT

Like this:

CREATE TABLE persons(
    id serial8 NOT NULL PRIMARY KEY,
    name varchar,
    -- A lot of other fields
    date_of_birth timestamp with time zone,
    date_of_birth_precision varchar(16),
    CHECK (date_of_birth_precision IN ('Years','Months','Days','Hours','Minutes'))
);

date_of_birth_precision describes the precision of date_of_birth.

I wonder if it violates this rule (because I don't fully understand the rule):

Every non-prime attribute of R is non-transitively dependent (i.e. directly dependent) on every superkey of R.

MySQL data too long error

Posted: 23 May 2013 08:09 PM PDT

One of the columns in my table was initially created as a varchar(1000). Now there is a need to increase the size of that column to hold around 5000 characters. I used an ALTER statement to increase the size of that column. DESC table_name as well as SHOW CREATE TABLE table_name clearly indicate the new size of that column to be 5000 characters. However, interestingly, when I try to insert any data exceeding 1000 characters, the error ERROR 1406 (22001): Data too long for column shows up. Out of desperation I changed the datatype to text, and it's still limited to 1000 characters. Any suggestion?
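For reference, the kind of resize statement presumably used; table and column names are placeholders:

ALTER TABLE my_table MODIFY COLUMN my_column VARCHAR(5000);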

I created another dummy table with a column size of varchar(5000) and that works just fine. The engine used is InnoDB and the default charset is UTF8.

consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest?

Posted: 23 May 2013 02:09 PM PDT

Mysql 5.5.29 Innodb- 128GB Ram - 32 cores - Raid 10 SSD.

Our server, which is a dedicated KVM guest on a 'baremetal' host, is hosting our heavy read-write DB server. Everything is file-per-table. innodb_buffer_pool_size is 96GB with 2 x 1GB log_file_size, and about 20 minutes of writes fill up those logs at peak time.

How bad of a situation would it be if O_DIRECT (we are currently running on the default) was enabled during a high workload without a battery-backed write cache, and a total crash were to occur on the OS or parent host, or the power was cut?

Does a battery-backed write cache make a difference if the server is a VM guest of the parent anyway?


MySQL Dump configuration file

Posted: 23 May 2013 04:09 PM PDT

I modified the my.cnf file so that innodb_data_file_path points somewhere other than the default path. But the mysqldump command still seems to try to dump from the default path.

I found that if I give the option --defaults-file=xxxx on the command line I can change it, but is there a way to configure mysqldump to use this option without specifying it on the command line?

How can I set a default session sql_mode for a given user?

Posted: 23 May 2013 12:09 PM PDT

I want to have a default session sql_mode that is different from the global sql_mode.

I can set it manually, but I'd like to do it automatically.

I thought I could add it to .my.cnf, but that doesn't work. I tried adding it to the [mysql] group in .my.cnf, and it does not throw an error.

Yet, when I connect and check my session sql_mode, the session still inherits the global sql_mode.

I'm using MySQL 5.5.16.
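One workaround that may fit here (a sketch, not a verified fix): have the command-line client run a SET statement on connect via init-command in .my.cnf; the sql_mode value below is only an example, and this affects the mysql client rather than every connector.

[mysql]
init-command="SET SESSION sql_mode='STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION'"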

TRUNCATE TABLE statement sometimes hangs

Posted: 23 May 2013 11:09 AM PDT

Why does the TRUNCATE TABLE statement hang sometimes? What are the reasons for this type of issue?

I am migrating from MySQL to MariaDB. This problem doesn't happen with MySQL, only with MariaDB.

The hanging statement is simply:

TRUNCATE TABLE sampledb.datatable;  

What can cause this to happen, and how could I fix it?

One more observation: if the table has only a small amount of data, maybe one or two rows, then the truncate query works successfully. But if the table has a lot of data, the query hangs.
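When the statement hangs, a common first diagnostic is to look for other sessions holding locks or long-running transactions on that table; a sketch to run from another session:

SHOW FULL PROCESSLIST;       -- look for what the TRUNCATE is waiting on
SHOW ENGINE INNODB STATUS\G  -- transaction and lock details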

Users cannot view tables in non-default schema in SSMS

Posted: 23 May 2013 07:09 PM PDT

I'm having an issue setting the VIEW DEFINITION permission appropriately at the schema level for one of my users. I've created the schema TestSchema and added some tables. The user currently has permissions set to access & modify the tables (SELECT, UPDATE, DELETE, etc.) through the db_datareader and db_datawriter roles. However, they cannot see any of the tables in the SSMS object explorer.

I've tried granting permissions to view definitions:

grant view definition on SCHEMA :: [TestSchema] to [User]  

That didn't work. I tried setting the table-level permission:

grant view definition on [TestSchema].[NewTable] to [User]  

That also didn't work. Then I tried just a blanket grant:

grant view definition to [User]  

And that did work; they can now see TestSchema, as well as other schemas that they shouldn't have access to.

My goal here is to allow the user to view all tables within a given schema. How do I accomplish that? If I should be able to do this by default, what permissions should I be looking at to find why I can't?
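One diagnostic that may help narrow this down, sketched under the assumption that you can impersonate the user: check which permissions the engine thinks the user actually holds on the schema.

EXECUTE AS USER = 'User';
SELECT * FROM fn_my_permissions('TestSchema', 'SCHEMA');
REVERT;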

How do I automatically deliver the SQL Server built-in reports?

Posted: 23 May 2013 12:50 PM PDT

When you right click on a number of items in the object explorer, it gives you the option of a reports menu. Various reports may exist depending on which type of item is selected. Yet, there does not appear to be any built-in functionality for automated delivery of these reports. Is it hidden away somewhere?

My real question: Is there any way to set some of these up to be automatically delivered through email? The reports I am targeting are the "Job Steps Execution History" and "Top Jobs" reports for SQL Server Agent. I am using SQL Server 2005/2008.
