Saturday, March 16, 2013


How to get only month from MySQL date/time?

Posted: 16 Mar 2013 09:04 PM PDT

In my MySQL server, I have rows that have the date of my created content. Example: 2013-03-17 02:53:47

For now, I am using:

$date = $fetch['thedaterownamegoeshere'];

But instead of getting the full content of the date (2013-03-17 02:53:47), I'd like to get only the month – in this example, I wish to have the number "03" as the result. How can I do that?

Thanks.
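For reference, MySQL can extract the month directly in the query; a minimal sketch, assuming an illustrative table `content` with a DATETIME column `created_at` (names are not from the question):

```sql
-- MONTH() returns the month as a number (3 for March):
SELECT MONTH(created_at) AS month_number FROM content;

-- To keep the leading zero ("03"), format the value as a string instead:
SELECT DATE_FORMAT(created_at, '%m') AS month_str FROM content;
```

Either expression can be added to the existing SELECT list, so PHP receives just the month rather than the full datetime.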

Back up strategy and recovery plan for production environment

Posted: 16 Mar 2013 04:34 PM PDT

I have been tasked with designing a database server installation, including a high-level backup strategy, for the following scenario:

  1. Single Server Production Environment with full recovery and redundancy
  2. Backup Data Files & Physical Database Data files should be on separate file systems
  3. Disk Storage and RAID configurations are important
  4. Backup routines should take account of both Database Backup and Backup Archiving to Tape/Other Media
  5. Disk layouts should reflect the differences between Operating System, DBMS Software, Database Files, Transaction Logs and Backup Sets.
  6. Should be able to recover to any point in the last 7 days

I plan on tackling this along the following lines:

To implement full recovery, I plan on using a grandfather-father-son strategy: a weekly full backup is done and the tape is held off site, and daily backups of the transaction logs are done with tapes held off site (this is necessary in order to be able to roll forward to a point in time). To satisfy quick recovery and redundancy, an off-site backup server could be used for database mirroring; this would satisfy point 2 above too. As for the RAID configuration, would RAID level 1 (mirroring alone) be sufficient instead of RAID 10 (a stripe of mirrors)? If anybody could offer some feedback on my intended course of action, and on how I should tackle points 4 and 5, I would be grateful.

Thanks in advance

Multilingual database design

Posted: 16 Mar 2013 02:47 PM PDT

I assume that the best approach for many languages (5 and more) is a concept based on:

    CREATE TABLE `language` (
        `language_id` char(2) NOT NULL,
        `collation` varchar(64) NOT NULL,
        PRIMARY KEY (`language_id`)
    );

    CREATE TABLE `product` (
        `product_id` int(11) NOT NULL auto_increment,
        PRIMARY KEY (`product_id`)
    );

    CREATE TABLE `product_translation` (
        `product_id` int(11) NOT NULL,
        `language_id` char(2) NOT NULL,
        `group_id` int(11) NOT NULL,
        `name` varchar(50) NOT NULL,
        `url` varchar(50) NOT NULL,
        `price` decimal(9, 2) NOT NULL,
        `description` text NOT NULL,
        UNIQUE KEY `language_id_2` (`language_id`, `url`),
        KEY `language_id` (`language_id`, `group_id`, `name`),
        FULLTEXT KEY `name` (`name`, `description`),
        FOREIGN KEY (`product_id`) REFERENCES `product` (`product_id`),
        FOREIGN KEY (`language_id`) REFERENCES `language` (`language_id`),
        PRIMARY KEY (`product_id`, `language_id`)
    );

But what if content is filled in by users? Should this structure be used only for static content? A registered user can change his language settings at any time, and the data he adds (private messages, articles) can be in any language; we cannot force him to add only English content. Should all such data be stored in one table and column (body)? Am I right?

For example private messages:

    CREATE TABLE `msg_messages` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `thread_id` int(11) NOT NULL,
        `body` text NOT NULL,
        `priority` int(2) NOT NULL DEFAULT '0',
        `sender_id` int(11) NOT NULL,
        `cdate` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

How to restore a filegroup from its backup in SQL Server

Posted: 16 Mar 2013 06:11 PM PDT

I need guidance on how to restore a filegroup in a database from a filegroup backup taken on another server hosting the same database. Is it possible to restore the filegroup by running RESTORE DATABASE with the FILEGROUP option?

I was trying to restore the filegroup from its backup like below:

RESTORE DATABASE FILEGROUP = 'MFG12'
FROM DISK = 'E:\MFG12.bak'
WITH RECOVERY

The backup set holds a backup of a database other than the existing DBNAME database.

When I try to restore into the same DB, the restore works perfectly:

Processed 840976 pages for database 'DB', file 'MFG12_File' on file 1.
RESTORE DATABASE ... FILE= successfully processed 840976 pages in 59.375 seconds (110.654 MB/sec).

When I run a SELECT query against a partition on that filegroup, it throws this error:

One of the partitions of index '' for table 'xyz' (partition ID 72057594656129024)
resides on a filegroup ("MFG12") that cannot be accessed because it is offline,
restoring, or defunct. This may limit the query result

When I check the status of all filegroups, I see that the filegroup MFG12 has status = "Restoring". How do I set it back to online mode?

When I try to restore the log file as well, I get the error below:

The roll forward start point is now at log sequence number (LSN) 66787000000001800001.
Additional roll forward past LSN 66787000000008900001 is required to complete the restore sequence.

Please advise how to bring the missing filegroup data back, given that the filegroup backup was taken from the same database on another server, or from the same server's DB.
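For what it's worth, a restored filegroup stays in the Restoring state until the database is rolled forward to a consistent point with the later log backups; only the final restore in the sequence uses WITH RECOVERY. A sketch of the usual sequence, with hypothetical log backup file names (DBNAME stands for the database from the error message):

```sql
-- Restore the filegroup, then apply every subsequent log backup;
-- WITH RECOVERY appears only on the last restore, which brings
-- the filegroup online.
RESTORE DATABASE DBNAME FILEGROUP = 'MFG12'
    FROM DISK = 'E:\MFG12.bak' WITH NORECOVERY;
RESTORE LOG DBNAME FROM DISK = 'E:\DBNAME_log1.trn' WITH NORECOVERY;
RESTORE LOG DBNAME FROM DISK = 'E:\DBNAME_log_tail.trn' WITH RECOVERY;
```

The "additional roll forward past LSN ..." message indicates more log backups (including the tail of the log) are still needed before recovery can complete.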

MSSQL Schemas ownership

Posted: 16 Mar 2013 03:08 PM PDT

I'm currently learning MS SQL Server 2008 and have come across schemas. One thing I don't understand about them is that they have to have an owner. Why can't they own themselves?

My background is mostly with Java and it sounds like a schema is like a package which does not have an owner.

MySQL user created via command line can't access PHPMyAdmin

Posted: 16 Mar 2013 11:35 AM PDT

I'm trying to create a read-only user with access to one database in phpMyAdmin. Since I'm a server newbie, I followed some instructions line by line when setting up my database and phpMyAdmin. As a result, my root account is not accessible via the GUI, only a full-access child account. That account cannot create users via the GUI, so I did the following via the command line:

mysql -u root -p
mysql> create user 'readonly-user'@'localhost' IDENTIFIED BY 'password';
mysql> grant select on mydatabase.* to 'readonly-user'@'localhost';

If I try to log in via PHPMyAdmin as readonly-user, however, I'm not able to get in.

Is there something I need to do to register this account with PHPMyAdmin?
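A couple of things worth checking from the MySQL side first; a sketch (the account name is the one created above):

```sql
-- Confirm the account exists and note which host it is bound to;
-- phpMyAdmin may be connecting via 127.0.0.1 or another host
-- rather than 'localhost':
SELECT user, host FROM mysql.user WHERE user = 'readonly-user';

-- Only needed if the grant tables were ever edited directly,
-- but harmless to run:
FLUSH PRIVILEGES;
```

If the host phpMyAdmin connects through differs from 'localhost', a second account (or a wildcard host) for that host would be required.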

NoSQL approach: design table "scheme"

Posted: 16 Mar 2013 08:30 AM PDT

We need to store simple log data in a database, but we also need to get and filter it in real time, based on 'user', a simple 'tag' (i.e. redis, memcache, php, mysql, etc.) and a timestamp. We need to scale horizontally, with really fast data access on billions of rows.

In a SQL approach, the table could look like this:

ID | timestamp | tag | user       | log text
1  | 19543532  | 1   | root       | { text log }
2  | 19543532  | 3   | redis-user | { text log }

where tags 1 and 3 are different and refer to another table (i.e. tag_id | tag_name). I think this is a relational approach, and we can create three indexes (timestamp, tag and user) in order to speed up data access.

What is a good practice for reproducing this in a NoSQL database like DynamoDB (AWS), where we can create only HASH or RANGE indexes? Does a SQL database fit better than DynamoDB?

My first attempt is:

First table: ID hash index

ID | log text
1  | { text log }
2  | { text log }

Second table: USER, TIMESTAMP range index

user  | timestamp | id
root  | 123145122 | 1
redis | 123491241 | 2

Third table: TAG index

tag        | id
debug      | 1
production | 2

Thanks in advance!

How to implement security model?

Posted: 16 Mar 2013 10:13 AM PDT

I have installed Oracle 11g on Fedora 14. I connected with SQL*Plus as SYS and created some tables, then created a user named "account" and a role, "pay_roll", with the following privileges. I assigned the role to the "account" user, but I cannot access the created tables when I log in as "account".

Here are my queries:

As SYS

create type pay_t as object(
    id int,
    name varchar(10))
/

create table pay of pay_t;
insert into pay values(pay_t(10,'p01'));

create user account identified by paccount;
grant create session to account;
grant connect to account;

create role pay_roll;
grant select on pay to pay_roll;
grant pay_roll to account;

As account:

select * from pay;  

This gives me an error saying there is no such table or view.
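One likely cause, for reference: the table was created while connected as SYS, so it lives in the SYS schema, and the unqualified name `pay` resolves to the ACCOUNT schema, where no such table exists. A sketch:

```sql
-- As account: qualify the table with its owning schema
SELECT * FROM sys.pay;

-- Or, as SYS, create a synonym so the unqualified name resolves:
CREATE PUBLIC SYNONYM pay FOR sys.pay;
```

Creating application tables under SYS is itself discouraged; a dedicated application schema is the usual choice.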

Changing passphrase for EncryptByPassPhrase

Posted: 16 Mar 2013 07:44 AM PDT

I am using EncryptByPassPhrase to encrypt some columns.

If I want to change the passphrase in the future, how can I change it?
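There is no in-place rekey; the usual approach is to decrypt each value with the old passphrase and re-encrypt it with the new one. A sketch with hypothetical table and column names:

```sql
-- DecryptByPassPhrase returns varbinary (or NULL if the phrase is wrong),
-- which EncryptByPassPhrase accepts directly as its cleartext argument:
UPDATE dbo.SecureData
SET SecretCol = EncryptByPassPhrase('new-phrase',
                    DecryptByPassPhrase('old-phrase', SecretCol))
WHERE DecryptByPassPhrase('old-phrase', SecretCol) IS NOT NULL;
```

Running this inside a transaction, with a verified backup beforehand, guards against losing data if any row fails to decrypt.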

Cluster Node Failed SQL Server 2012

Posted: 16 Mar 2013 08:18 AM PDT

I would like to install SQL Server 2012 but I get some errors. Do you know what I have to download or do to fix this issue?

[screenshot of the setup errors]

MySQL Database Recovery from ibdata1 and .frm Files Fails

Posted: 16 Mar 2013 08:20 AM PDT

My MySQL database server crashed but I managed to recover copies of the ibdata1 and .frm files.

I have followed instructions, including some from this site, that say to do a clean install and overwrite the new ibdata1 and .frm files with the old ones.

I have also tried forcing InnoDB recovery with innodb_force_recovery=6, but no success yet. I am guessing that my ibdata1 file is corrupt. The most recent backup I have is about a month old and no good for my purposes.

Any more ideas on how I could recover my database(s)?

Tape rotation strategies

Posted: 16 Mar 2013 12:24 PM PDT

I am trying to find a tape rotation strategy that would be optimal under the following conditions:

Backup Media Rotation strategy should support the following:

  1. Media Archives required every 28 days retained up to 3 years
  2. Require to recover to any point in time within past 7 days
  3. Can sustain up to 1 week gap in recovery time for next 4 weeks

I believe a good choice would be the grandfather-father-son strategy. However, I am unsure about point 3 above. Can someone explain what point 3 entails?

PRIMARY KEY CONSTRAINT fundamentals

Posted: 16 Mar 2013 02:19 PM PDT

Can you explain why the "primary key constraint" exists if, as you know, it is possible to create relationships between tables using unique indexes alone?

The RDBMS that is used is SQL Server.

In fact, after creating a unique (not null) index, I can create a foreign key constraint on another table pointing to the indexed column without having to declare that column "the primary key". So what is its purpose? Is it a question of aesthetics?
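To illustrate the premise of the question, a foreign key can indeed reference a UNIQUE constraint rather than a primary key (illustrative names):

```sql
CREATE TABLE Parent (
    Code int NOT NULL CONSTRAINT UQ_Parent_Code UNIQUE
);

CREATE TABLE Child (
    ParentCode int NOT NULL
        CONSTRAINT FK_Child_Parent FOREIGN KEY REFERENCES Parent (Code)
);
```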

Can't choose which SQL Server 2005 backup date to restore in SQL Server 2008

Posted: 16 Mar 2013 05:29 AM PDT

I am trying to restore a database from SQL Server 2005 to SQL Server 2008. In Management Studio 2008, I am pointing to a hard-drive copy of the backup file for the database, but I get only one choice, even though there are multiple backups of different dates in the file. I can only restore the earliest one, which is the only one listed; I need a later one from the backup file. How do I get the complete list to choose from in SQL Server Management Studio 2008 when the backup file was created in Management Studio 2005?

How to best use connection pooling in SQLAlchemy for PgBouncer transaction-level pooling?

Posted: 16 Mar 2013 04:29 AM PDT

I am using SQLAlchemy to query a PostgreSQL database behind PgBouncer, with transaction-level pooling.

What is the best pattern to use for this kind of set up? Should I have one-engine-per-process, using a ConnectionPool, or should I create an engine per-request, and use NullPool for each one of them? Is there a different pattern altogether that I should be using?

Thanks very much! Let me know if more information is needed and I'll update ASAP.

how to chain postgres RULEs?

Posted: 16 Mar 2013 12:22 PM PDT

I have implemented a data denormalization strategy using PostgreSQL RULEs. I picked rules instead of triggers for performance reasons.


Schema is structured like this:

  • Application has many clients
  • Client has many projects
  • Project has many users

One part of the system stores hits for every user in a stats table. A hit is an imaginary metric; it is not really relevant. The system can collect many such metrics. There are a lot of records in the stats table (> 1,000,000 per day).

I want to know how many hits are per user, per project, per client and per application for given day.

To make this fast, I've grouped stats by day and stored the output in the user_hits table. During this process, the application_id, client_id and project_id have also been added (as columns), and appropriate indexes created.

I want to further optimise the process by grouping things by project_id, client_id and finally application_id. The data pipeline is like this:

stats -> user_hits -> project_hits -> client_hits -> application_hits

I want to make sure that when I delete the data from user_hits for a given day, the data in project_hits for that same date is also deleted, and that this propagates to the last table in the chain.

I defined these simple rules:

CREATE RULE delete_children AS ON DELETE TO user_hits
  DO ALSO
  DELETE FROM project_hits WHERE day = OLD.day;

CREATE RULE delete_children AS ON DELETE TO project_hits
  DO ALSO
  DELETE FROM client_hits WHERE day = OLD.day;

CREATE RULE delete_children AS ON DELETE TO client_hits
  DO ALSO
  DELETE FROM application_hits WHERE day = OLD.day;

However, when I issue statement like this:

DELETE FROM user_hits WHERE day = current_date;  

I expect it to run these 3 queries in turn:

DELETE FROM project_hits WHERE day = current_date;
DELETE FROM client_hits WHERE day = current_date;
DELETE FROM application_hits WHERE day = current_date;

However, it doesn't.

It completes the operation, but it takes a couple of minutes to do so (with test data). With real data it takes hours, while running those 3 queries by hand takes a couple of milliseconds. The time it takes seems proportional to the number of combinations (users x projects x clients x applications).

What is the problem here? Am I missing something? Can this be implemented with triggers in an optimised way?


Included sample script which reproduces the problem:

https://gist.github.com/assembler/5151102


UPDATE: The transition from user_hits to project_hits (and so forth) is done by a worker process in the background (since it involves contacting 3rd-party services for additional info). It is smart enough to recalculate everything for missing dates. So the only thing I need is a way to DELETE records in a cascading fashion, in an optimised way.


UPDATE: The stats table is filled on a daily basis. The only possible scenario is to unconditionally delete the data for a whole day and then replace it with new values.


UPDATE: I noticed that the number of affected rows (extracted from the explain statement) is exactly equal to the product of the affected rows in the user_hits, project_hits, client_hits, and application_hits tables (hundreds of millions of rows).

It turns out that it works like this:

  1. I run DELETE FROM user_hits WHERE day = current_date;
  2. For each row in the user_hits table, the RULE is triggered, which deletes EVERY row from project_hits
  3. For each row of project_hits, the RULE is triggered, which deletes EVERY row from client_hits
  4. For each row of client_hits, the RULE is triggered, which deletes EVERY row from application_hits

So, the number of operations is equal to the product of count of affected rows in these tables.
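Given that the deletes are unconditional for a whole day, one hedged alternative is to drop the rules and delete from all four tables in a single helper function, avoiding the multiplicative rule expansion entirely (table names are from the question; the function name is illustrative):

```sql
CREATE OR REPLACE FUNCTION delete_hits_for_day(d date) RETURNS void AS $$
BEGIN
    -- One targeted, index-assisted delete per table, instead of the
    -- rules firing once per deleted row:
    DELETE FROM user_hits        WHERE day = d;
    DELETE FROM project_hits     WHERE day = d;
    DELETE FROM client_hits      WHERE day = d;
    DELETE FROM application_hits WHERE day = d;
END;
$$ LANGUAGE plpgsql;

-- SELECT delete_hits_for_day(current_date);
```

This sketch assumes the three DELETE rules are dropped first, so each statement runs exactly once.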

How to model inheritance of two tables mysql

Posted: 16 Mar 2013 08:50 AM PDT

I have some tables where I store data and, depending on the type of person (worker, civil) that did a job, I want to record it in a table 'event'. Now these guys rescue an animal (there is a table animal).

Finally, I want a table to store the event that a guy (WORKER, CIVIL) saved an animal. But how should I add a foreign key, or how do I know the id of the civil or worker that did the job?

In this design I do not know how to record which person did the job. If I had only one kind of person (say civil), I would simply store the civil_id in the person field of this last table. But how do I know whether it was a civil or a worker? Do I need another intermediate table?

How can I model this design in MySQL?

could you help me with the sqlfiddle?
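For reference, a common way to model this is a supertype table that owns the shared id, with one subtype table per kind of person; the event then stores a single person_id regardless of subtype. A sketch (names are illustrative, not from the schema below):

```sql
CREATE TABLE person (
    person_id   INT PRIMARY KEY AUTO_INCREMENT,
    person_type ENUM('WORKER','CIVIL') NOT NULL
) ENGINE=InnoDB;

CREATE TABLE worker (
    person_id INT PRIMARY KEY,
    FOREIGN KEY (person_id) REFERENCES person (person_id)
) ENGINE=InnoDB;

CREATE TABLE civil (
    person_id INT PRIMARY KEY,
    FOREIGN KEY (person_id) REFERENCES person (person_id)
) ENGINE=InnoDB;

-- event references the supertype, regardless of which subtype did the job:
CREATE TABLE event (
    id_event  INT PRIMARY KEY,
    id_animal INT NOT NULL,
    person_id INT NOT NULL,
    FOREIGN KEY (person_id) REFERENCES person (person_id)
) ENGINE=InnoDB;
```

With this shape, person_type on the supertype row says which subtype table holds the details, so no per-type columns or nulls are needed in event.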

UPDATE

I have model it the following way:

    DROP TABLE IF EXISTS `tbl_animal`;
    CREATE TABLE `tbl_animal` (
        id_animal INTEGER     NOT NULL PRIMARY KEY AUTO_INCREMENT,
        name      VARCHAR(25) NOT NULL DEFAULT "no name",
        specie    VARCHAR(10) NOT NULL DEFAULT "Other",
        sex       CHAR(1)     NOT NULL DEFAULT "M",
        size      VARCHAR(10) NOT NULL DEFAULT "Mini",
        edad      VARCHAR(10) NOT NULL DEFAULT "Lact",
        pelo      VARCHAR(5)  NOT NULL DEFAULT "short",
        color     VARCHAR(25) NOT NULL DEFAULT "not defined",
        ra        VARCHAR(25) NOT NULL DEFAULT "not defined",
        CONSTRAINT `uc_Info_Animal` UNIQUE (`id_animal`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_animal` VALUES (1,'no name', 'dog', 'M','Mini','Lact','Long','black','Bobtail');
    INSERT INTO `tbl_animal` VALUES (2,'peluchin', 'cat', 'M','Mini','Lact','Long','white','not defined');
    INSERT INTO `tbl_animal` VALUES (3,'asechin', 'cat', 'M','Mini','Lact','Corto','orange','not defined');

    DROP TABLE IF EXISTS `tbl_person`;
    CREATE TABLE `tbl_person` (
        type_person VARCHAR(50) NOT NULL PRIMARY KEY
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_person` (type_person) VALUES ('Worker');
    INSERT INTO `tbl_person` (type_person) VALUES ('Civil');

    DROP TABLE IF EXISTS `tbl_worker`;
    CREATE TABLE `tbl_worker` (
        id_worker      INTEGER     NOT NULL PRIMARY KEY,
        type_person    VARCHAR(50) NOT NULL,
        name_worker    VARCHAR(50) NOT NULL,
        address_worker VARCHAR(40) NOT NULL DEFAULT "not defined",
        delegation     VARCHAR(40) NOT NULL DEFAULT "not defined",
        FOREIGN KEY (type_person) REFERENCES `tbl_person` (type_person),
        CONSTRAINT `uc_Info_worker` UNIQUE (`id_worker`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_worker` VALUES (1,'Worker','N_CEDENTE1', 'DIR Worker 1', 'DEL');
    INSERT INTO `tbl_worker` VALUES (2,'Worker','N_worker1', 'DIR Worker 2', 'DEL');
    INSERT INTO `tbl_worker` VALUES (3,'Worker','N_worker2', 'address worker','delegation worker');

    DROP TABLE IF EXISTS `tbl_civil`;
    CREATE TABLE `tbl_civil` (
        id_civil         INTEGER     NOT NULL PRIMARY KEY,
        type_person      VARCHAR(50) NOT NULL,
        name_civil       VARCHAR(50),
        procedence_civil VARCHAR(40) NOT NULL DEFAULT "Socorrism",
        FOREIGN KEY (type_person) REFERENCES `tbl_person` (type_person),
        CONSTRAINT `uc_Info_civil` UNIQUE (`id_civil`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_civil` VALUES (1,'Civil','N_civil1' , 'Socorrism');

    CREATE TABLE `tbl_event` (
        id_event       INTEGER NOT NULL,
        id_animal      INTEGER NOT NULL,
        type_person    VARCHAR(50) NOT NULL,
        date_reception DATE DEFAULT '2000-01-01 01:01:01',
        FOREIGN KEY (id_animal) REFERENCES `tbl_animal` (id_animal),
        FOREIGN KEY (type_person) REFERENCES `tbl_person` (type_person),
        CONSTRAINT `uc_Info_ficha_primer_ingreso` UNIQUE (`id_animal`,`id_event`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_event` VALUES (1,1, 'Worker','2013-01-01 01:01:01' );
    INSERT INTO `tbl_event` VALUES (2,2, 'Civil','2013-01-01 01:01:01' );

However, is there a way to get rid of the nulls? The queries I have are:

    SELECT a.*, b.*, z.*
    FROM tbl_event a
         LEFT JOIN tbl_worker b ON a.type_person = b.type_person
         LEFT JOIN tbl_animal z ON z.id_animal = a.id_animal;

    SELECT a.*, b.*, z.*
    FROM tbl_event a
         LEFT JOIN tbl_civil b ON a.type_person = b.type_person
         LEFT JOIN tbl_animal z ON z.id_animal = a.id_animal;

Here is updated sqlfiddle

Naming foreign key columns

Posted: 16 Mar 2013 06:41 PM PDT

Now, I hear that the best practice for naming foreign key columns is to use the name of the table and field (and maybe the function, if there are multiple foreign keys to the same table). For example, a foreign key that references the users table's id column would be:

creatorUserId  

Now the thing is that right now I am building my application, and for the sake of development speed I am going to be referencing everything by a surrogate foreign key (INT AUTO_INCREMENT). I really don't want to get into which option is faster in which case this early in development; I'd rather leave that stuff to the end (to try to prevent premature optimization, since I am more interested in working on the front-end of this application than the back-end).

With that being said, a concern I have is that if I do in fact want to change some of the foreign keys from the surrogate key to a natural key (all relevant tables will have both) and I use column names like

creatorUserId   

I am going to have to go through all the backend code in order to make the change, a process that would not be very pleasant.

I was thinking about using just the function and table name for foreign key columns, like:

creatorUser  

In that case, if I change which column I am using for the foreign key, the back-end code changes are a lot simpler and there would be a lot fewer of them (if you are using an ORM, which I am). Also, since surrogate keys are generally numbers and natural keys are generally alpha characters, it should still be relatively easy to tell which one is being used just by looking at the value.

Would this be considered bad practice?

SELECTing multiple columns through a subquery

Posted: 16 Mar 2013 05:41 AM PDT

I am trying to SELECT 2 columns from the subquery in the following query, but am unable to do so. I tried creating an alias table, but still couldn't get them.

SELECT DISTINCT petid, userid,
    (SELECT MAX(comDate) FROM comments WHERE petid=pet.id) AS lastComDate,
    (SELECT userid FROM comments WHERE petid=pet.id ORDER BY id DESC LIMIT 1) AS lastPosterID
FROM pet LEFT JOIN comments ON pet.id = comments.petid
WHERE userid='ABC' AND deviceID!='ABC'
  AND comDate>=DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 2 MONTH);

Basically, I am trying to get the lastComDate & lastPosterID from the same row - the row which is the latest one in comments for the specific pet. Please suggest how I can get them in an efficient way.

The above query works, but seems overkill, as the same row is fetched twice. Moreover, the ORDER BY clause is significantly slower than the aggregate function, as I found while profiling the query. So a solution that avoids sorting would be appreciated.
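One sketch that avoids both correlated subqueries and the ORDER BY: join against a derived table holding each pet's latest comment date (note that ties on comDate could still return more than one row per pet):

```sql
SELECT p.id AS petid, c.userid AS lastPosterID, c.comDate AS lastComDate
FROM pet p
JOIN comments c ON c.petid = p.id
JOIN (
    SELECT petid, MAX(comDate) AS maxDate
    FROM comments
    GROUP BY petid
) m ON m.petid = c.petid AND m.maxDate = c.comDate;
```

The original query's WHERE filters (userid, deviceID, the 2-month window) would be reapplied around this as needed.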

Designing Simple Schema for Disaggregation of Demand Forecast

Posted: 16 Mar 2013 11:41 AM PDT

I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case:

I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).

  • Orders are placed at each level.
  • Number of orders shall be viewable in weekly buckets for the next 6 months.
  • Demand forecast can be done for each product level.
  • Demand forecast for any week within next 6 months can be done today.
  • Demand forecast is done for weekly buckets, for the next 6 months.

Demand forecast is usually done at a higher level in the hierarchy (Raw Material or Work in Progress level). It has to be disaggregated to a lower level (End Product).

There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:

  1. User specifies percentage distribution for end product. Say, there's a forecast of 1000 for Work In Progress.. and user says I want 40% for End Product 1 and 60% for End Product 2 in bucket 10.. Then for 10th week (Sunday to Saturday) from now, forecast value for End Product 1 would be 400 and, for End Product 2 would be 600.
  2. User says, just disaggregate according to orders placed against end products in Bucket 5, and orders in bucket 5 for End Product 1 and 2 are 200 and 800 respectively, then forecast value for EP1 would be ((200/1000) * 100)% and for EP2 would be ((800/1000) * 100)% of forecast for 'Work in Progress'.

Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be:

product name | bucket number | week start date | week end date | forecast value | created_on  

PRODUCT_HIERARCHY table could look like this:

id  |   name                |   parent_id
__________________________________________
1   |   raw material        |   (null)
2   |   work in progress    |   1
3   |   end product 1       |   2
4   |   end product 2       |   2

ORDERS table might look like this:

id | prod_id | order_date | delivery_date | delivered_date  

where,

prod_id is a foreign key that references the id column of the PRODUCT_HIERARCHY table.

How should the forecast be stored? What would be a good basic schema for such a requirement?


My idea to select orders for 26 weekly buckets is:

SELECT
    COUNT(*) TOTAL_ORDERS,
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(sysdate, 6),
        TO_NUMBER( TO_CHAR(SYSDATE,'DD-MON-YYYY') - TO_CHAR(ADD_MONTHS(sysdate, 6),'DD-MON-YYYY') ) / 7
    ) BUCKET_NO
FROM
    orders_table
WHERE
    delivery_date BETWEEN SYSDATE AND ADD_MONTHS(sysdate, 6);

But this will give weekly buckets starting from today, irrespective of the day of the week. How can I convert them to Sunday-to-Saturday weeks in Oracle?
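For the Sunday-to-Saturday alignment, one hedged option: TRUNC(d, 'IW') snaps to the Monday of d's ISO week, so truncating d + 1 and stepping back one day yields the Sunday that starts d's week. A sketch against the orders table above:

```sql
SELECT TRUNC(delivery_date + 1, 'IW') - 1 AS week_start,  -- Sunday
       TRUNC(delivery_date + 1, 'IW') + 5 AS week_end,    -- Saturday
       COUNT(*)                           AS total_orders
FROM   orders_table
WHERE  delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
GROUP  BY TRUNC(delivery_date + 1, 'IW');
```

A bucket number could then be derived from the week_start offset relative to the first Sunday in the range.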

Please help designing this database structure.

(will be using Oracle 11g)

DB2 Authentication TRUST_CLNTAUTH

Posted: 16 Mar 2013 03:41 PM PDT

I am using db2inst1 to connect to a database in DB2 which I have installed on my machine. Because of this, the db2inst1 user does not require username/password authentication (it borrows them from the OS). I would like to change that, and force a username/password to be requested every time a connection is initiated.

More specifically, this is what the authentication configuration looks like:

db2 get dbm cfg|grep -i auth

 GSS Plugin for Local Authorization    (LOCAL_GSSPLUGIN) =
 Server Connection Authentication          (SRVCON_AUTH) = NOT_SPECIFIED
 Database manager authentication        (AUTHENTICATION) = CLIENT
 Alternate authentication           (ALTERNATE_AUTH_ENC) = NOT_SPECIFIED
 Cataloging allowed without authority   (CATALOG_NOAUTH) = NO
 Trusted client authentication          (TRUST_CLNTAUTH) = SERVER
 Bypass federated authentication            (FED_NOAUTH) = NO

db2 connect to dbName

   Database Connection Information

   Database server        = DB2/LINUXX8664 10.1.0
   SQL authorization ID   = DB2INST1
   Local database alias   = DBNAME

db2 connect to dbName user db2inst1 using password

   SQL1639N  The database server was unable to perform authentication because
   security-related database manager files on the server do not have the required
   operating system permissions.  SQLSTATE=08001

I have played with some authentication combinations for "AUTHENTICATION" and "TRUST_CLNTAUTH" without much luck.

SQL Server BPA 2008R2

Posted: 16 Mar 2013 09:41 AM PDT

I have been using the SQL Server BPA to get good information from Microsoft. I was using its 2005 version, in which I used to export the results in CSV format, but recently I got two new servers with SQL Server 2008 R2 installed on them. I know I can't run BPA 2005 on these, so I chose the R2 version, but it doesn't have an option to save the report in CSV format, only XML. I have tried Excel to convert it into CSV, but no use: even though it displays the details, I can't narrow down the results. Any idea how to convert the results to CSV format?

SQL Server 2008 R2 Express password overwrite

Posted: 16 Mar 2013 07:41 AM PDT

I'm using a program that requires an SA login to SQL Server. We had issues where some 3rd-party support numpty changed the SA password and never informed us, so the program couldn't log in.

I've accessed our information via the SQL Server Management Studio. Fortunately, the 'remember password' option had been checked on the machine.

Via the Object Explorer, I've accessed the SA login credentials and changed the password accordingly. However, every time I reload SQL Server Management Studio, the password resets to the original one.

When I test the connection of the program that uses the SA login by manually entering the new password, it appears to work OK. But when I let the program start up automatically, Windows Services reports that the login is failing due to incorrect login details - which I'm guessing is because SQL Server keeps reverting to the old password?

Can anyone advise how I stop it from reverting to the old password and overwrite it with the new password definitively (SQL Server 2008 R2 Express)? I'm hoping that should help fix my more specific problem with the other program.

Setting up DRBD on an active MySQL server

Posted: 16 Mar 2013 06:42 PM PDT

When it comes to setting up DRBD and MySQL, is the following possible?

  • Set up DRBD on an active MySQL server
  • Set up DRBD with no downtime allowed

MySQL optimization - year column grouping - using temporary table, filesort

Posted: 16 Mar 2013 01:41 PM PDT

I have a transactions table with 600,000 records, and I need to list the count for the dashboard on a financial-year basis. The table uses MyISAM. I tried adding an index on the transaction date (tran_date). Even though the query uses the index, it creates a temporary table, which takes more time because of the temporary table and the filesort. Is there any way to optimize the query to improve the query time?

  SELECT COUNT( * ) AS cnt,
  CASE WHEN MONTH( tran_date ) >= 3
       THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
       ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
  END AS financial_year
  FROM `transactions1`
  WHERE tran_date >= '2010-06-01'
  GROUP BY financial_year

  Showing rows 0 - 4 (5 total, Query took 1.2095 sec)
  id  select_type  table          type   possible_keys      key        key_len  ref   rows    Extra
  1   SIMPLE       transactions1  range  PRIMARY,tran_date  tran_date  8        NULL  346485  Using where; Using index; Using temporary; Using filesort
  Keyname       Type   Unique  Packed  Field          Cardinality  Collation
  PRIMARY       BTREE  Yes     No      tran_date      205720       A
                                       tran_ID        617162       A
  coupon_No     BTREE  No      No      coupon_No      617162       A
  account_type  BTREE  No      No      account_type   3            A
  prodCode      BTREE  No      No      prodCode       430          A
                                       tran_date      308581       A
  tran_date     BTREE  No      No      tran_date      205720       A
  cust_ID       BTREE  No      No      cust_ID        3265         A
                                       tran_date      308581       A
                                       account_type   308581       A
                                       points_earned  617162       A

Update :

I tried adding partitioning, which did not help much compared with the non-partitioned table. Does replication help in this case for reading this table? There will be more grouping based on the dates (using the date functions) when reading the data.

Edit:

I altered the query and reduced the query execution time. The query I used is:

  SELECT SUM( count )
  FROM (
      SELECT COUNT( * ) AS count,
      CASE WHEN MONTH( tran_date ) >= 3
           THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
           ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
      END AS format_date
      FROM transactions1
      GROUP BY tran_date
  ) AS s
  GROUP BY format_date

  Showing rows 0 - 4 (5 total, Query took 0.5636 sec)
  id  select_type  table          type   possible_keys  key        key_len  ref   rows    Extra
  1   PRIMARY      <derived2>     ALL    NULL           NULL       NULL     NULL  229676  Using temporary; Using filesort
  2   DERIVED      transactions1  index  NULL           tran_date  8        NULL  617162  Using index

But when using

  SELECT COUNT( * ) AS count,
  CASE WHEN MONTH( tran_date ) >= 3
       THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
       ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
  END AS format_date
  FROM transactions1
  GROUP BY tran_date

  Showing rows 0 - 29 (229,676 total, Query took 0.0006 sec)

takes less time without the SUM(count) over the derived table. Is there any other way to get the sum without using the subquery in MySQL, or can the subquery be optimized to use the index?
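A hedged variant: grouping on a single integer (the financial year's starting year) avoids the per-row string CONCAT, with the label built only after aggregation. It will not eliminate the temporary table, since the GROUP BY is still on an expression, but the grouped key is much cheaper than a string:

```sql
-- MONTH(tran_date) < 3 evaluates to 1 for Jan/Feb and 0 otherwise, so the
-- inner expression yields the starting year of the March-to-February
-- financial year:
SELECT CONCAT(fy, '-', fy + 1) AS financial_year, cnt
FROM (
    SELECT YEAR(tran_date) - (MONTH(tran_date) < 3) AS fy,
           COUNT(*) AS cnt
    FROM transactions1
    WHERE tran_date >= '2010-06-01'
    GROUP BY fy
) AS t;
```

For fully index-only dashboard reads, a precomputed financial-year column (maintained on insert) would let the GROUP BY run against a plain indexed column instead.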

MySQL join for multiple child records in a single joined row

Posted: 16 Mar 2013 02:41 PM PDT

I have a master table (teacher) structured like:

teacherId   Name        Class
1           Praveen     10
2           John        9

and a child table (student) structured like:

studentId   teacherId   Name
1           1           David
2           1           Creg
3           2           Mike
4           2           Steve
5           2           Jim

How can I get a result set like:

teacherId  teacher   studentId_1  student_1  studentId_2  student_2  studentId_3  student_3
1          Praveen   1            David      2            Creg       null         null
2          John      3            Mike       4            Steve      5            Jim
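If a fixed number of student columns is not strictly required, GROUP_CONCAT gives one row per teacher without the null padding; a true pivot would instead need one MAX(CASE ...) pair per student position. A sketch using the table and column names above:

```sql
SELECT t.teacherId,
       t.Name AS teacher,
       GROUP_CONCAT(CONCAT(s.studentId, ':', s.Name)
                    ORDER BY s.studentId SEPARATOR ', ') AS students
FROM teacher t
LEFT JOIN student s ON s.teacherId = t.teacherId
GROUP BY t.teacherId, t.Name;
```

The LEFT JOIN keeps teachers with no students, whose students column comes back NULL.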

Thanks in advance..

Regards, Praveen

What's the difference between a temp table and table variable in SQL Server?

Posted: 16 Mar 2013 01:43 PM PDT

This seems to be an area with quite a few myths and conflicting views.

So what is the difference between a table variable and a local temporary table in SQL Server?
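For readers comparing the two, these are the syntactic forms in question:

```sql
-- Local temporary table: created in tempdb, visible for the session,
-- and supports DDL (indexes, statistics) after creation:
CREATE TABLE #temp (id int PRIMARY KEY);

-- Table variable: also backed by tempdb, scoped to the batch,
-- with constraints declared inline:
DECLARE @tv TABLE (id int PRIMARY KEY);
```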

How can I change the default storage engine in phpmyadmin?

Posted: 16 Mar 2013 01:33 PM PDT

I use InnoDB almost exclusively in my applications. However, if I'm not careful when setting up a table, I forget to change the engine and phpMyAdmin sticks me with MyISAM. Is there a way to change the default storage engine?
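The default engine is a MySQL server setting rather than a phpMyAdmin one; a sketch for MySQL 5.5 and later (older versions named the variable storage_engine):

```sql
-- Takes effect for new connections until the server restarts:
SET GLOBAL default_storage_engine = InnoDB;
```

To make it permanent, set `default-storage-engine = InnoDB` under the `[mysqld]` section of my.cnf (or my.ini on Windows) and restart the server; phpMyAdmin will then pre-select InnoDB for new tables.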
