Monday, August 12, 2013

[how to] Can MySQL Cluster AQL (Push-Down Join) be triggered with a sharded table at the same time?

Can MySQL Cluster AQL (Push-Down Join) be triggered with a sharded table at the same time?

Posted: 12 Aug 2013 06:45 PM PDT

From Jonas's slides, I learned that the "Push-Down Join" (AQL) feature can make joins up to 70x faster. The most important factor is eliminating the round trips between the API node and the data nodes: where all the data lives on the same node, no inter-node communication is necessary. But what if a certain table is partitioned across a group of data nodes? How does AQL eliminate the data shipping then? Can push-down joins work in a sharded scenario?
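For anyone who wants to experiment with this, push-down can be toggled per session and its effect checked in the query plan; a quick sketch (t1 and t2 are placeholder NDB tables, not from the slides):

SET ndb_join_pushdown = ON;  -- session variable available in MySQL Cluster 7.2+

-- In EXPLAIN output, tables whose join was pushed down to the data nodes
-- show up with "Child of ... in pushed join" in the Extra column.
EXPLAIN
SELECT *
FROM t1
JOIN t2 ON t2.a = t1.b;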

How to create combinations of records (order does not matter, no repetition allowed) in MySQL tables

Posted: 12 Aug 2013 08:13 PM PDT

I've got a table with hundreds of rows; each row is a recipe with nutritional information. For example:

recipe_table:

id      | calories | protein | carbs | fat
recipe1 | 100      | 20g     | 10g   | 2g
recipe2 | 110      | 10g     | 12g   | 12g
recipe3 | 240      | 20g     | 1g    | 23g
...

I needed to create a new table (recipe_index) that would show every possible combination of every recipe in recipe_table as a set of 3, so it would look something like:

recipe_index:

id1     | id2     | id3     | calories | protein | carbs | fat
recipe1 | recipe2 | recipe3 | 450      | 50g     | 23g   | 37g
...

Basically it allows me to query recipe_index and say "what 3 recipe combinations come to a total value that's between 440 calories and 460 calories"

My current code for doing this works for 3 meals; however, I end up with about 450,000 records in recipe_index. I need to do the same thing for 4, 5, and 6 meals as well, so I'm looking at billions of records by the end of this. Is there a more efficient way of doing this? Perhaps I need to look into partitioning a table for each range?

My current SQL code:

INSERT INTO recipe_index
SELECT DISTINCT '3' AS nummeals,
    t1.id AS id1, t2.id AS id2, t3.id AS id3, 0 AS id4,
    t1.calories_ps + t2.calories_ps + t3.calories_ps AS calories,
    t1.protein_ps + t2.protein_ps + t3.protein_ps AS protein,
    t1.carbohydrate_ps + t2.carbohydrate_ps + t3.carbohydrate_ps AS carbohydrate,
    t1.fat_ps + t2.fat_ps + t3.fat_ps AS fat
FROM recipes t1
INNER JOIN recipes t2 ON t1.Id < t2.Id
INNER JOIN recipes t3 ON t2.Id < t3.Id
WHERE t1.image <> '' AND t2.image <> '' AND t3.image <> ''
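For reference, the same self-join pattern extended to four meals looks like the sketch below. The extra join is exactly what makes the row count explode, so one thing worth trying is filtering during generation instead of afterwards; the calorie window at the end is a hypothetical pre-filter, not part of the original query:

INSERT INTO recipe_index
SELECT DISTINCT '4' AS nummeals, t1.id AS id1, t2.id AS id2, t3.id AS id3, t4.id AS id4,
    t1.calories_ps + t2.calories_ps + t3.calories_ps + t4.calories_ps AS calories,
    t1.protein_ps + t2.protein_ps + t3.protein_ps + t4.protein_ps AS protein,
    t1.carbohydrate_ps + t2.carbohydrate_ps + t3.carbohydrate_ps + t4.carbohydrate_ps AS carbohydrate,
    t1.fat_ps + t2.fat_ps + t3.fat_ps + t4.fat_ps AS fat
FROM recipes t1
INNER JOIN recipes t2 ON t1.Id < t2.Id
INNER JOIN recipes t3 ON t2.Id < t3.Id
INNER JOIN recipes t4 ON t3.Id < t4.Id
WHERE t1.image <> '' AND t2.image <> '' AND t3.image <> '' AND t4.image <> ''
  -- hypothetical pre-filter: keep only combinations in a plausible calorie window
  AND t1.calories_ps + t2.calories_ps + t3.calories_ps + t4.calories_ps BETWEEN 440 AND 460;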

How to partition an existing non-partitioned table

Posted: 12 Aug 2013 08:01 PM PDT

I have an existing table with data:

dbo.Test (col1,col2,col3....) ON [PRIMARY]  

I need to change this table to be partitioned like this:

dbo.Test(col1,col2,col3....) ON Ps_Date(Col2)  

How can I achieve this without dropping and recreating the table?
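For context, the usual way to do this in SQL Server is to rebuild the clustered index onto the partition scheme; a sketch, assuming Ps_Date is a partition scheme over a date-based partition function and the existing clustered index is named CIX_Test (everything except dbo.Test, Ps_Date and Col2 is a placeholder):

-- Assumed setup: the partition function and scheme already exist, e.g.
-- CREATE PARTITION FUNCTION Pf_Date (datetime) AS RANGE RIGHT FOR VALUES ('2013-01-01');
-- CREATE PARTITION SCHEME Ps_Date AS PARTITION Pf_Date ALL TO ([PRIMARY]);

-- Rebuild the clustered index onto the scheme; Col2 must be part of the key.
-- If the clustered index backs a primary key constraint, drop and recreate
-- the constraint instead.
CREATE CLUSTERED INDEX CIX_Test
ON dbo.Test (Col2)
WITH (DROP_EXISTING = ON)
ON Ps_Date (Col2);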

Conversions between any data type and the binary data types are not guaranteed to be the same between versions of SQL Server

Posted: 12 Aug 2013 04:06 PM PDT

I just noticed that the SQL Server 2012 MSDN documentation for the binary and varbinary types mentions the following at the end of the article:

Conversions between any data type and the binary data types are not guaranteed to be the same between versions of SQL Server.

I noticed that this note does not exist in the previous versions of the article for 2005, 2008, and 2008 R2. I searched around online but didn't find any exact explanation that satisfied my question of "WHY?" (which is the question I have here).

The project I am working on needs to store data as varbinary(max), but based on this little note, I am afraid that if I move the binary data and then convert it on another SQL Server version, it will not match the original data converted to binary. Answering this question in more depth would be a great help.

Thanks!
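To make the concern concrete: the note is about conversions between other types and binary, not about binary data itself. A varbinary value you store and read back is returned byte-for-byte; it is the byte layout produced by converting another type that may differ across versions. A small illustration (the values are arbitrary):

-- The bytes this conversion produces are what the documentation warns about:
SELECT CAST(CAST('2013-08-12' AS datetime) AS varbinary(8));

-- Data stored as varbinary involves no conversion and round-trips unchanged:
DECLARE @blob varbinary(max) = 0x48656C6C6F;
SELECT @blob;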

Updating a big replicated Dimension (SQL Server PDW)

Posted: 12 Aug 2013 04:10 PM PDT

We use a SQL Server PDW appliance for our data warehouse. One of the tables in our warehouse is a replicated table with about 20 million rows. As part of our ETL process we need to expire old records from this dimension; however, we are seeing that updating a handful of records (<100) takes over 1 hour to complete. This is what I would like to improve if I can.

Naturally, one option I considered was changing this dimension from replicated to distributed. My testing shows that it would fix the slow ETL step (from 1.5 hours down to 30 seconds), but all the joins against the distributed version of this dimension would be affected, since the joins are almost never based on the same distribution column. When I look at the execution plans of some of these queries, I usually see either a ShuffleMove or a BroadcastMove operation.

So my question to the PDW gurus here is:

Is there anything else that can be done in order to improve the performance of updating records in the replicated version of this Dimension?

Again, moving to a distributed table doesn't seem to be the best solution, since it would affect hundreds of already-written SQL queries and reports developed by other people.
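For reference, one pattern often suggested on PDW for exactly this situation (a sketch, not something from the question; DimThing, ExpiredKeys and the columns are placeholders) is to avoid the UPDATE entirely: rebuild the dimension with CTAS, applying the expiry logic in the SELECT, then swap the tables by renaming:

-- Rebuild the replicated dimension with the expiry logic applied.
CREATE TABLE dbo.DimThing_new
WITH (DISTRIBUTION = REPLICATE)
AS
SELECT ThingKey,
       SomeAttribute,
       CASE WHEN ThingKey IN (SELECT ThingKey FROM dbo.ExpiredKeys)
            THEN 0 ELSE IsCurrent END AS IsCurrent
FROM dbo.DimThing;

-- Swap the old and new versions.
RENAME OBJECT dbo.DimThing     TO DimThing_old;
RENAME OBJECT dbo.DimThing_new TO DimThing;
DROP TABLE dbo.DimThing_old;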

having trouble using dbms_scheduler

Posted: 12 Aug 2013 01:40 PM PDT

I am using Oracle 11g and have never used dbms_scheduler jobs before. I have this query that I want to run every day at 5.

BEGIN
SYS.DBMS_SCHEDULER.CREATE_JOB
(
job_name => 'UPDATE_VIEW_ACCTUSE'
,job_type => 'PLSQL_BLOCK' -- required when job_action holds an inline PL/SQL block
,start_date => trunc(sysdate) + 8/24
,repeat_interval => 'freq=MINUTELY;interval=2'
,end_date => trunc(sysdate) + 20/24
,job_class => 'DEFAULT_JOB_CLASS'
,job_action => 'declare
in_clause varchar2(2560);
sel_query varchar2(2560);
n number := 0;
begin
for x in (select distinct userfieldcd from acctuserfield)
loop
if n <> 0 then
in_clause := in_clause || ', ';
end if;
in_clause := in_clause || '''' || x.userfieldcd || '''';
n := 1;
end loop;
sel_query := 'select * from (select Acctnbr, userfieldcd, value from acctuserfield) pivot (max(value) for userfieldcd in ('||in_clause||'))';
dbms_output.put_line (sel_query);
execute immediate 'create or replace view VIEW_ACCTUSE as ' ||sel_query;
end;'
,comments => NULL
);
END;
/

I put my query in the job_action. Is that how you do it?
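For what it's worth, putting the PL/SQL block in job_action is how it's done, but note that the repeat_interval above runs the job every 2 minutes, not once a day at 5. A sketch of switching to a daily schedule and enabling the job (17:00 is my assumption for "5"; jobs only run on schedule once enabled):

BEGIN
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'UPDATE_VIEW_ACCTUSE',
    attribute => 'repeat_interval',
    value     => 'FREQ=DAILY; BYHOUR=17; BYMINUTE=0; BYSECOND=0');
  SYS.DBMS_SCHEDULER.ENABLE('UPDATE_VIEW_ACCTUSE');
END;
/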

Are passwordless logins like ##MS_SQLResourceSigningCertificate## , ##MS_AgentSigningCertificate## a vulnerability?

Posted: 12 Aug 2013 11:39 AM PDT

I'm new to MS SQL Server.

Colleagues working in IT security have run a scan showing some DB users with null passwords.

Some of them are (## included):

##MS_SQLResourceSigningCertificate##
##MS_SQLReplicationSigningCertificate##
##MS_SQLAuthenticatorCertificate##
##MS_PolicySigningCertificate##
##MS_SmoExtendedSigningCertificate##
##MS_AgentSigningCertificate##

I suspect they are not a security threat but, since they don't use Windows Authentication either, I don't know how to support that assumption.

What are these logins for?
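A sketch of how to see what these are, using only standard catalog views: certificate-mapped logins have their own principal type, which distinguishes them from ordinary password logins:

-- Server principals whose "login" is really a certificate mapping:
SELECT name, type_desc, create_date
FROM sys.server_principals
WHERE type = 'C'   -- C = login mapped to a certificate
ORDER BY name;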

PostgreSQL postmaster will not start after configuring for SymmetricDS

Posted: 12 Aug 2013 11:45 AM PDT

I'm trying to configure SymmetricDS for use with PostgreSQL, a database platform widely supported by the former.

When running through the quick-start tutorial (which leads you to some pre-setup documentation), I am instructed to append the following line to postgresql.conf:

custom_variable_classes = 'symmetric'  

The next step of the tutorial requires the postmaster to be running (at least that's what I glean from the stack trace...), so I

service postgresql-9.2 start  

but this fails.

Without this single line added (or if it's commented out), PostgreSQL starts as expected.

I must be doing something wrong, so what is it?

  1. yum install postgresql92 postgresql92-server
  2. Download and unzip SymmetricDS into ~
  3. service postgresql-9.2 initdb
  4. Edit /var/lib/pgsql/9.2/data/postgresql.conf as above

Create a new database in SQL Server 2008 R2, but it is not Blank

Posted: 12 Aug 2013 01:06 PM PDT

When I create a new database in SQL Server 2008 R2, it is created successfully, but it is not blank and empty: it contains many tables that come from another database. I also tried specifying different names for its .mdf and .ldf files, but nothing changed!

Would you please help me figure out where the problem is?

Also, is there any command that would reset my database to a blank one?

Mathematical method for spotting irregularity in numbers flow

Posted: 12 Aug 2013 01:08 PM PDT

I am trying to build an engine which is going to predict orders for items inside a shop.
The three main variables are:

  1. Time/date (the x-axis, e.g. 1.1.2013, 1.2.2013; I do the calculation once per day)

  2. Available amount of an item (green line in the graphs).
    This line represents the available amount of an item,
    and it decreases as the item is sold.

  3. Days until the next order (purple line on the graph).
    If we make an order every Monday, this line is 0 on Monday and 7 on Saturday. The cycle is not always 7 days; sometimes it is 7, sometimes 14, etc., depending on the supplier.
    It is reset every time the supplier brings items to the warehouse.

In the next three graphs I'll show the three possible situations I am hunting for.

1.) Ideal situation: I sell out my items just before my next order arrives. I want to react on the 9th day (blue) to send a new order to the supplier.

(graph: ideal situation)

2.) Bad situation: I sell out faster than my next order arrives. I want to react on the 3rd day (orange), so I can warn the user that items are selling too fast.

(graph: items sold out before the next order arrives)

3.) Bad situation: items are going out too slowly, and at the end of the order cycle I will have surplus items in stock.

(graph: surplus stock at the end of the cycle)

Here is sample data (MS SQL):

CREATE TABLE #stock(
    id int NOT NULL,
    item_id int NOT NULL,
    ts datetime NOT NULL,
    amount float NOT NULL)

CREATE TABLE #orders (
    id int NOT NULL
   ,item_id int NOT NULL
   ,ts datetime NOT NULL
   ,days_until_next int NOT NULL)

CREATE TABLE #items (
    id int NOT NULL
   ,name varchar(50))

INSERT INTO #items
SELECT 1, 'Good art' UNION ALL
SELECT 2, 'Fast art' UNION ALL
SELECT 3, 'Slow art'

INSERT INTO #orders
SELECT 1, 1, '2013-01-01', 8 UNION ALL
SELECT 2, 2, '2013-01-01', 8 UNION ALL
SELECT 3, 3, '2013-01-01', 8

INSERT INTO #stock
SELECT 1, 1, '2013-01-01', 8 UNION ALL
SELECT 2, 1, '2013-01-02', 7 UNION ALL
SELECT 3, 1, '2013-01-03', 6 UNION ALL
SELECT 4, 1, '2013-01-04', 5 UNION ALL
SELECT 5, 1, '2013-01-05', 4 UNION ALL
SELECT 6, 1, '2013-01-06', 3 UNION ALL
SELECT 7, 1, '2013-01-07', 2 UNION ALL
SELECT 8, 2, '2013-01-01', 200 UNION ALL
SELECT 9, 2, '2013-01-02', 150 UNION ALL
SELECT 10, 2, '2013-01-03', 100 UNION ALL
SELECT 11, 2, '2013-01-04', 50 UNION ALL
SELECT 12, 2, '2013-01-05', 0 UNION ALL
SELECT 13, 2, '2013-01-06', 0 UNION ALL
SELECT 14, 2, '2013-01-07', 0 UNION ALL
SELECT 15, 3, '2013-01-01', 100 UNION ALL
SELECT 16, 3, '2013-01-02', 80 UNION ALL
SELECT 17, 3, '2013-01-03', 70 UNION ALL
SELECT 18, 3, '2013-01-04', 70 UNION ALL
SELECT 19, 3, '2013-01-05', 60 UNION ALL
SELECT 20, 3, '2013-01-06', 60 UNION ALL
SELECT 21, 3, '2013-01-07', 50

SELECT * FROM #items
SELECT * FROM #orders
SELECT * FROM #stock

I want to construct a query that will run every day and try to detect these situations.
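For illustration, a rough sketch of the kind of daily check this could start from (the 3-day window, the reference date and the flags are assumptions): estimate the average daily decrease per item and compare the projected stock-out with the order cycle.

SELECT cur.item_id,
       cur.amount AS amount_today,
       (prev.amount - cur.amount) / 3.0 AS avg_daily_sales,
       o.days_until_next,
       CASE
           WHEN (prev.amount - cur.amount) <= 0
               THEN 'too slow: surplus expected at end of cycle'
           WHEN cur.amount < (prev.amount - cur.amount) / 3.0 * o.days_until_next
               THEN 'too fast: will sell out before the next order'
           ELSE 'on track'
       END AS situation
FROM #stock AS cur
INNER JOIN #stock AS prev
    ON prev.item_id = cur.item_id
   AND prev.ts = DATEADD(day, -3, cur.ts)
INNER JOIN #orders AS o
    ON o.item_id = cur.item_id
WHERE cur.ts = '2013-01-07';  -- in production this would be the current day

Note that days_until_next here is the full cycle length from #orders; a real version would use the days actually remaining until the next delivery.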

SQL Server insert randomly takes 10+ seconds to complete?

Posted: 12 Aug 2013 10:48 AM PDT

I have an insert statement that is being generated by the .NET Entity Framework. In most cases, this particular insert executes in 0 ms according to SQL Server Profiler. One out of every 30 or so inserts jumps to as high as 12 seconds in duration, causing the .NET client on the other end to show as "Not Responding" while it waits. Server load should not be an issue, as our server is very, very lightly loaded.

Here is the table the insert is being performed against:

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SET ANSI_PADDING ON
GO

CREATE TABLE [dbo].[ProductEvents](
    [EventID] [int] IDENTITY(1,1) NOT NULL,
    [KID] [int] NOT NULL,
    [EventDescription] [varchar](50) NOT NULL,
    [EventDate] [datetime] NOT NULL,
    [UserName] [varchar](50) NOT NULL,
    [Notes] [varchar](max) NOT NULL,
    [Version] [timestamp] NOT NULL,
    [IsSynchronized] [bit] NOT NULL,
    [LastSyncDate] [datetime] NULL,
    CONSTRAINT [PK_ProductEvents] PRIMARY KEY CLUSTERED ([EventID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

SET ANSI_PADDING OFF
GO

ALTER TABLE [dbo].[ProductEvents] ADD CONSTRAINT [DF_ProductEvents_IsSychronized]
    DEFAULT ((0)) FOR [IsSynchronized]
GO

ALTER TABLE [dbo].[ProductEvents] WITH CHECK ADD CONSTRAINT [FK_ProductEvents_Products]
    FOREIGN KEY([KID]) REFERENCES [dbo].[Products] ([KID]) ON DELETE CASCADE
GO

ALTER TABLE [dbo].[ProductEvents] CHECK CONSTRAINT [FK_ProductEvents_Products]
GO

And the query as seen by SQL Server Profiler (actual example):

exec sp_executesql N'insert [dbo].[ProductEvents]([KID], [EventDescription], [EventDate], [UserName], [Notes], [IsSynchronized], [LastSyncDate])
values (@0, @1, @2, @3, @4, @5, null)
select [EventID], [Version]
from [dbo].[ProductEvents]
where @@ROWCOUNT > 0 and [EventID] = scope_identity()',
N'@0 int,@1 varchar(50),@2 datetime2(7),@3 varchar(50),@4 varchar(max),@5 bit',
@0=1894,@1='Modified',@2='2013-08-12 08:09:25.4766233',@3='KNEXTION\aellison',
@4='Description changed from Mini Awareness Ribbon Cookie Cutter - RM 1698 to Mini Awareness Ribbon Cookie Cutter - R&M 1698.',@5=0

I will also be glad to post the Query Plan, but I'm not sure in what format I should post it (I'm mainly a StackOverflow guy).

EDIT: Here is the execution plan as a screenshot from SSMS.

(screenshot: execution plan)

Any ideas on how to begin to track this down?
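Not an answer, but one place to start: sporadic multi-second stalls on an otherwise instant insert are sometimes caused by data or log file autogrowth, and those events land in the default trace. A sketch for checking (assumes the default trace is enabled, which it is by default):

-- EventClass 92 = Data File Auto Grow, 93 = Log File Auto Grow.
DECLARE @path nvarchar(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

SELECT te.name, t.DatabaseName, t.FileName, t.StartTime, t.Duration
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
WHERE t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;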

Defining a process for troubleshooting performance issues [on hold]

Posted: 12 Aug 2013 11:50 AM PDT

The issue of poor database performance is one which presents itself from time to time and there are many techniques which can be employed to examine the root cause of slowness.

Does anyone have a clearly defined process which they use to troubleshoot performance, like a tried and tested 'works for me' method for investigating issues?

For me, I might employ the following (not necessarily in this order):

  • check whether index/statistics maintenance ran successfully (if possible/appropriate, re-run it if it failed)
  • examine fragmentation of indexes (rebuild/reorganize depending on the fragmentation level)
  • examine the last time statistics were updated (maybe update stats depending on their age and whether a large number of data changes have taken place)
  • check whether there's any blocking (kill the offending process if appropriate; see the sketch below)
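For the blocking check, a minimal sketch of the kind of query that works on SQL Server (other platforms have equivalents):

-- Sessions that are currently blocked, who blocks them, and what they run.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS current_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;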

What would others add/delete from the above list?

A repeatable process for troubleshooting all performance issues might be difficult to assemble, but if it were possible it would certainly be a useful starting point :)

is there a recommended way to update many Postgres schemas

Posted: 12 Aug 2013 11:46 AM PDT

I have a database in Postgres which has 15 schemas; the schemas are identical, with the same tables, views and functions.

The system uses one schema at a time, as if they were separate databases. So when I install an update I need to update all 15 schemas in the database.

My problem comes when we update the database with changes to functions and triggers, because in each function I need to modify which schema it uses, once per schema; example:

CREATE OR REPLACE FUNCTION cliente11."restablecerComprobante"("numeroComprobante" text)
  RETURNS integer AS
$BODY$
DECLARE idComprobante INTEGER;
DECLARE tipoComprobante TEXT;
DECLARE idCliente INTEGER;
DECLARE fechaComprobante DATE;
DECLARE totalComprobante NUMERIC;
DECLARE estadoComprobante INTEGER;
DECLARE percepcionIIBB NUMERIC;
DECLARE percepcionIIBBxKg NUMERIC;
DECLARE percepcionIVA NUMERIC;
BEGIN
    -- load the receipt data
    SELECT id, tipo, cli_id, fecha_comprobante, total_comprobante, percepcion_iibb, percepcion_iibb_kg, percepcion_iva
      INTO idComprobante, tipoComprobante, idCliente, fechaComprobante, totalComprobante, percepcionIIBB, percepcionIIBBxKg, percepcionIVA
      FROM cliente11.lista_comprobantes_a_liberar
     WHERE estado = 0 AND numero_comprobante ILIKE "numeroComprobante";

    -- other queries...

    -- generate the sales commissions
    SELECT cliente11."generarComisionesVenta"(idComprobante, tipoComprobante) INTO estadoComprobante;

    RETURN 1;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;

Is there a more practical way to update the 15 schemas without having to modify the script to run it in each schema?
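One pattern that may help, sketched below: keep the function bodies schema-unqualified and let a DO block create each function once per schema, pinning the function's search_path so unqualified references resolve to the right schema. (touch_version and the cliente% filter are hypothetical stand-ins for your functions and schema names.)

DO $do$
DECLARE
  s text;
BEGIN
  FOR s IN SELECT nspname FROM pg_namespace WHERE nspname LIKE 'cliente%' LOOP
    EXECUTE format($f$
      CREATE OR REPLACE FUNCTION %I.touch_version()
      RETURNS integer AS $body$
      BEGIN
        -- unqualified table references resolve via the search_path below
        RETURN 1;
      END;
      $body$ LANGUAGE plpgsql
      SET search_path = %I
    $f$, s, s);
  END LOOP;
END
$do$;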

ORA-01031 while creating a view as DBA

Posted: 12 Aug 2013 12:25 PM PDT

I'm trying to create a view under my DBA schema on Oracle XE. I already did this on the production database, and there it worked with a user that had far fewer privileges than the DBA role. But now I always get ORA-01031 errors. Here is my query:

CREATE OR REPLACE VIEW usr_v_user_not_reg AS
  SELECT username "User", db_instance "Instance",
    (
      CASE
        WHEN username IN
        (
          SELECT username FROM sys.dba_users
          MINUS
          SELECT username FROM usr_t_user_reg
        ) THEN 'not registered'
        WHEN username IN
        (
          SELECT username FROM usr_t_user_reg
          MINUS
          SELECT username FROM sys.dba_users
        ) THEN 'no longer present'
      END
    ) "Status"
  FROM usr_t_user_reg
  WHERE db_instanz = 'TEST_DB'
    AND username NOT IN
    (
      SELECT username FROM usr_t_user_reg
      INTERSECT
      SELECT username FROM sys.dba_users
    );

And this is the output I get:

Line: 10 Column: 20
SQL-Error: ORA-01031: insufficient privileges
01031. 00000 -  "insufficient privileges"
*Cause:    An attempt was made to change the current username or password
           without the appropriate privilege. This error also occurs if
           attempting to install a database without the necessary operating
           system privileges.
           When Trusted Oracle is configure in DBMS MAC, this error may occur
           if the user was granted the necessary privilege at a higher label
           than the current login.

I even tried it with the XE SYS account, but with the same result. Am I missing something important?

Percona mysql server slow queries

Posted: 12 Aug 2013 09:04 PM PDT

Recently I bought a new database server and installed Percona MySQL Server 5.5.32-31, then transferred my /var/lib/mysql directory from the old database server, which ran Percona MySQL Server 5.5.28. Both servers' configuration files are the same, and the new server has double the storage and three times the RAM of the old one. Yet I am getting slow queries on the new database server.
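For reference, a first diagnostic step might be enabling the slow query log on both servers and comparing what each captures; a sketch (the 1-second threshold is arbitrary):

SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;  -- log statements slower than 1 second
SHOW VARIABLES LIKE 'slow_query_log%';  -- also shows the log file location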

Possible to run multiple Reporting Services (SSRS) instances on different machines with databases on same machine?

Posted: 12 Aug 2013 11:18 AM PDT

With SQL Server 2012, can SSRS instances be setup on two different machines, with both instances using the same machine for their SQL Server databases?

Machine 1

  • SQL Server 2012 Database Server
  • SSRS instance 1

Machine 2

  • SSRS instance 2 pointed to machine 1's database server

There's this document describing installing multiple instances in 2005, but on the same machine. Can you point me to a better resource for our design? Is this still supported in 2012? How are the SSRS databases (ReportServer, ReportServerTempDB) created for the 2nd instance?

Oracle won't start

Posted: 12 Aug 2013 12:24 PM PDT

The server was working fine, but I had to stop it once. When trying to start Oracle using a script we have, I got the following error:

SQL> Connected to an idle instance.
SQL> ORA-01081: cannot start already-running ORACLE - shut it down first
SQL> Disconnected

Also, when trying to start SQL*Plus manually AS SYSDBA, I get:

ERROR:
ORA-12162: TNS:net service name is incorrectly specified

Using SQL*Plus with other users, I get:

ERROR:
ORA-12537: TNS:connection closed

Any help appreciated ...

MariaDB Galera Cluster Replication error, no state received?

Posted: 12 Aug 2013 11:29 AM PDT

I have a very minimal, fresh-out-of-the-box setup for MariaDB with Galera clustering. The master node of the cluster is able to run and reports that there is only one node attached to the cluster, the master itself. When I try attaching the second node to the cluster, I receive a state receive error and the process fails. The configuration on the master looks like this:

[mariadb]
wsrep_cluster_address=gcomm://
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1

This is in /etc/my.cnf.d/zabbix_cluster.cnf. The slave node's file looks similar, except it has the name of the master node in it. When I run service mysql restart on the slave node, the output says that MySQL was started successfully, but when I do pgrep mysql it returns nothing. Upon examining the /var/log/mysql/error.log I set up, it says that there is a state receive error and that the node will never receive state. The output looks like this:

130806 10:10:15 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
130806 10:10:15 mysqld_safe WSREP: Running position recovery with --log_error=/tmp/tmp.uZwsHWfH6y
130806 10:10:17 mysqld_safe WSREP: Recovered position 00000000-0000-0000-0000-000000000000:-1
130806 10:10:17 [Warning] option 'general_log': boolean value '/var/log/mysql/mysqld.log' wasn't recognized. Set to OFF.
130806 10:10:17 [Warning] option 'slow_query_log': boolean value '/var/log/mysql-slow-queries.log' wasn't recognized. Set to OFF.
130806 10:10:17 [Note] WSREP: wsrep_start_position var submitted: '00000000-0000-0000-0000-000000000000:-1'
130806 10:10:17 InnoDB: The InnoDB memory heap is disabled
130806 10:10:17 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130806 10:10:17 InnoDB: Compressed tables use zlib 1.2.3
130806 10:10:17 InnoDB: Using Linux native AIO
130806 10:10:17 InnoDB: Initializing buffer pool, size = 128.0M
130806 10:10:17 InnoDB: Completed initialization of buffer pool
130806 10:10:17 InnoDB: highest supported file format is Barracuda.
130806 10:10:17 InnoDB: Waiting for the background threads to start
130806 10:10:18 Percona XtraDB (http://www.percona.com) 1.1.8-29.3 started; log sequence number 1598129
130806 10:10:18 [Note] Plugin 'FEEDBACK' is disabled.
130806 10:10:18 [Note] Event Scheduler: Loaded 0 events
130806 10:10:18 [Note] WSREP: Read nil XID from storage engines, skipping position init
130806 10:10:18 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/libgalera_smm.so'
130806 10:10:18 [Note] WSREP: wsrep_load(): Galera 23.2.4(r147) by Codership Oy <info@codership.com> loaded succesfully.
130806 10:10:18 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
130806 10:10:18 [Note] WSREP: Reusing existing '/var/lib/mysql//galera.cache'.
130806 10:10:18 [Note] WSREP: Passing config to GCS: base_host = 10.162.111.109; base_port = 4567; cert.log_conflicts = no; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1; gcs.fc_limit = 16; gcs.fc_master_slave = NO; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = NO; replicator.causal_read_timeout = PT30S; replicator.commit_order = 3
130806 10:10:18 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
130806 10:10:18 [Note] WSREP: Start replication
130806 10:10:18 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
130806 10:10:18 [Note] WSREP: protonet asio version 0
130806 10:10:18 [Note] WSREP: backend: asio
130806 10:10:18 [Note] WSREP: GMCast version 0
130806 10:10:18 [Note] WSREP: (4e646cee-feaa-11e2-0800-10aa5e70a57b, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
130806 10:10:18 [Note] WSREP: (4e646cee-feaa-11e2-0800-10aa5e70a57b, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
130806 10:10:18 [Note] WSREP: EVS version 0
130806 10:10:18 [Note] WSREP: PC version 0
130806 10:10:18 [Note] WSREP: gcomm: connecting to group 'my_wsrep_cluster', peer 'zabbixcrt02:'
130806 10:10:19 [Note] WSREP: declaring 21415a01-fea8-11e2-0800-7061deb24ae4 stable
130806 10:10:19 [Note] WSREP: Node 21415a01-fea8-11e2-0800-7061deb24ae4 state prim
130806 10:10:19 [Note] WSREP: view(view_id(PRIM,21415a01-fea8-11e2-0800-7061deb24ae4,8) memb {
        21415a01-fea8-11e2-0800-7061deb24ae4,
        4e646cee-feaa-11e2-0800-10aa5e70a57b,
} joined {
} left {
} partitioned {
})
130806 10:10:19 [Note] WSREP: gcomm: connected
130806 10:10:19 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
130806 10:10:19 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
130806 10:10:19 [Note] WSREP: Opened channel 'my_wsrep_cluster'
130806 10:10:19 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
130806 10:10:19 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
130806 10:10:19 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.29-MariaDB'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MariaDB Server, wsrep_23.7.3.rXXXX
130806 10:10:19 [Note] WSREP: STATE EXCHANGE: sent state msg: 4eb2dd53-feaa-11e2-0800-5d7a774f5dbf
130806 10:10:19 [Note] WSREP: STATE EXCHANGE: got state msg: 4eb2dd53-feaa-11e2-0800-5d7a774f5dbf from 0 (ceszabbixcrt02)
130806 10:10:19 [Note] WSREP: STATE EXCHANGE: got state msg: 4eb2dd53-feaa-11e2-0800-5d7a774f5dbf from 1 (ceszabbixcrt03)
130806 10:10:19 [Note] WSREP: Quorum results:
        version    = 2,
        component  = PRIMARY,
        conf_id    = 7,
        members    = 1/2 (joined/total),
        act_id     = 0,
        last_appl. = -1,
        protocols  = 0/4/2 (gcs/repl/appl),
        group UUID = bcb32946-fea7-11e2-0800-32db11e867f1
130806 10:10:19 [Note] WSREP: Flow-control interval: [23, 23]
130806 10:10:19 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 0)
130806 10:10:19 [Note] WSREP: State transfer required:
        Group state: bcb32946-fea7-11e2-0800-32db11e867f1:0
        Local state: 00000000-0000-0000-0000-000000000000:-1
130806 10:10:19 [Note] WSREP: New cluster view: global state: bcb32946-fea7-11e2-0800-32db11e867f1:0, view# 8: Primary, number of nodes: 2, my index: 1, protocol version 2
130806 10:10:19 [Warning] WSREP: Gap in state sequence. Need state transfer.
130806 10:10:21 [Note] WSREP: Prepared SST request: mysqldump|10.162.111.109:3306
130806 10:10:21 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
130806 10:10:21 [Note] WSREP: Assign initial position for certification: 0, protocol version: 2
130806 10:10:21 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (bcb32946-fea7-11e2-0800-32db11e867f1): 1 (Operation not permitted) at galera/src/replicator_str.cpp:prepare_for_IST():442. IST will be unavailable.
130806 10:10:21 [Note] WSREP: Node 1 (zabbixcrt03) requested state transfer from '*any*'. Selected 0 (zabbixcrt02)(SYNCED) as donor.
130806 10:10:21 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 0)
130806 10:10:21 [Note] WSREP: Requesting state transfer: success, donor: 0
130806 10:10:24 [Warning] WSREP: 0 (zabbixcrt02): State transfer to 1 (zabbixcrt03) failed: -2 (No such file or directory)
130806 10:10:24 [ERROR] WSREP: gcs/src/gcs_group.c:gcs_group_handle_join_msg():719: Will never receive state. Need to abort.
130806 10:10:24 [Note] WSREP: gcomm: terminating thread
130806 10:10:24 [Note] WSREP: gcomm: joining thread
130806 10:10:24 [Note] WSREP: gcomm: closing backend
130806 10:10:25 [Note] WSREP: view(view_id(NON_PRIM,21415a01-fea8-11e2-0800-7061deb24ae4,8) memb {
        4e646cee-feaa-11e2-0800-10aa5e70a57b,
} joined {
} left {
} partitioned {
        21415a01-fea8-11e2-0800-7061deb24ae4,
})
130806 10:10:25 [Note] WSREP: view((empty))
130806 10:10:25 [Note] WSREP: gcomm: closed
130806 10:10:25 [Note] WSREP: /usr/sbin/mysqld: Terminated.
130806 10:10:25 mysqld_safe Number of processes running now: 0
130806 10:10:25 mysqld_safe WSREP: not restarting wsrep node automatically
130806 10:10:25 mysqld_safe mysqld from pid file /var/lib/mysql/zabbixcrt03.pid ended

I'm not sure why this is happening, or what it means. I see it can connect, but there is something that is not transferring from the Master to the Slave. What should I look for/at or do?

I have also made sure that /var/lib/mysql/ is owned by mysql:mysql, and it is. The permissions on the directory are 755.

Error: "Storage Engine for the Table Doesn't Support Nullable Columns" (SequelPro)

Posted: 12 Aug 2013 02:00 PM PDT

I'm trying to load a very normal .csv file (that was created from Excel 2011 for Mac) into SequelPro (using MySQL) with my Mac -- and I've recently started getting this error consistently. Can anybody let me know what it is and how to fix it?

An error occurred while trying to add the new table 'wblist' by

CREATE TABLE `wblist` (
  `FILE` VARCHAR(255),
  `FIRSTNAME` VARCHAR(255),
  `MIDDLE` VARCHAR(255),
  `LASTNAME` VARCHAR(255),
  `FULLNAME` VARCHAR(255),
  `GENDER` VARCHAR(255),
  `ADDRESS` VARCHAR(255),
  `CITY` VARCHAR(255),
  `STATE` VARCHAR(255),
  `ZIP` VARCHAR(255),
  `PHONE` BIGINT(11),
  `UNIT` VARCHAR(255),
  `JOB` VARCHAR(255),
  `AREA` VARCHAR(255),
  `TIME` VARCHAR(255),
  `MAILINGADDRESS` VARCHAR(255),
  `MAILINGCITY` VARCHAR(255),
  `MAILINGSTATE` VARCHAR(255),
  `MAILINGZIP` VARCHAR(255),
  `ID` BIGINT(11),
  `CONFIDENCE` VARCHAR(255),
  `BIRTHDATE` VARCHAR(255),
  `AGE` INT(11),
  `RACE` VARCHAR(255),
  `ETHNICITY` VARCHAR(255),
  `RELIGION` VARCHAR(255),
  `PARTY` VARCHAR(255),
  `REGISTRATIONDATE` VARCHAR(255),
  `VOTERSTATUS` VARCHAR(255),
  `OtherPhone` VARCHAR(255),
  `POSSIBLEADDRESS` VARCHAR(255),
  `POSSIBLEMAILADDRESS` VARCHAR(255),
  `RECID` VARCHAR(255)
) ENGINE=CSV;

MySQL said: The storage engine for the table doesn't support nullable columns

This is stopping me before I'm able to import the table. Thanks for the help!
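For what it's worth, the CSV storage engine requires every column to be declared NOT NULL, so a declaration along these lines should load (trimmed to three columns for illustration; the full column list stays the same):

CREATE TABLE `wblist` (
  `FILE` VARCHAR(255) NOT NULL,
  `FIRSTNAME` VARCHAR(255) NOT NULL,
  `LASTNAME` VARCHAR(255) NOT NULL
  -- ...remaining columns, each declared NOT NULL...
) ENGINE=CSV;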

optimizing MySQL for traffic analytics system

Posted: 12 Aug 2013 08:01 PM PDT

Background:

I've developed a URL-shortener system like Bitly with the same features, so the system also tracks clicker info and presents it as graphs, as analytics data, to the person who shortened the link. Currently I'm using MySQL and have a table that stores click info with this schema:

visit_id (int)
ip (int)
date (datetime)
country
browser
device
os
referrer (varchar)
url_id (int)  // foreign key to the shortened URL

and for now, just the url_id field has an index.

The system should present click analytics for the time period the user wants, for example the past hour, the past 24 hours, the past month, ...

For example, to generate graphs for the past month, I run the following queries:

SELECT all DAY(date) AS period, COUNT( * )
FROM (
    SELECT *
    FROM visits
    WHERE url_id = '$url_id'
) AS URL
WHERE DATE > DATE_SUB( CURRENT_TIMESTAMP( ), INTERVAL 1 MONTH )
GROUP BY DAY( DATE )

//another query to display clicker browsers in this period
//another query to display clicker countries in this period
// ...

issues:

  • for a shortened link with about 500,000 clicks, it takes about 3-4 seconds to compute just the first query, so all the queries together take about 10-12 seconds, which is terrible.
  • lots of memory and CPU are needed to run such queries

Questions:

1- How can I improve and optimize the structure so that the analytics for high-traffic links are shown in less than 1 second (like Bitly and similar web apps) and with less CPU and RAM usage? Should I make an index on the fields date, country, browser, device, os, referrer? If yes, how do I do that for the date field, given that I sometimes need to group clicks by DAY(date), sometimes by HOUR(date), sometimes by MINUTE(date), and so on? (See the index sketch after question 3.)

2- Is MySQL suitable for this application? Assume that, at most, my application should handle 100 million links and 10 billion clicks on them in total. Should I consider switching to a NoSQL solution, for example?

3- If MySQL is OK, is my database design and table structure proper and well designed for my application's needs, or do you have better recommendations and suggestions?
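As a sketch of the indexing idea from question 1 (an assumption, not a tested fix): a composite index leading with url_id and then date lets MySQL narrow to one link's rows and range-scan the period before grouping, regardless of whether the grouping is by day, hour or minute:

ALTER TABLE visits ADD INDEX idx_url_date (url_id, date);

-- The per-period query can then use the index prefix directly:
SELECT DAY(date) AS period, COUNT(*)
FROM visits
WHERE url_id = '$url_id'
  AND date > DATE_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 MONTH)
GROUP BY DAY(date);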

UPDATE: I made an index on the referrer column, but it didn't help at all; it even hurt performance, and I think that's because of the low cardinality of this column (and the others) and the large resulting index size relative to the RAM of my server.

I think making indexes on those columns would not solve my problem. My idea is one of these:

1- if using MySQL, maybe generating statistics with background processing for high-traffic links is better than calculating them live at the user's request.

2- using some caching solution like memcached to help MySQL with high traffic links.

3- using a NoSQL solution such as MongoDB, with techniques like map-reduce, which I am only poorly familiar with and have never used.

What do you think?

LDAP in SQL Server 2012

Posted: 12 Aug 2013 06:01 PM PDT

In our current system, we query the LDAP using SQL Server 2008 R2, but we are moving to SQL Server 2012, where we have hit a snag. According to http://msdn.microsoft.com/en-us/library/cc707782.aspx one of the discontinued features of SQL Server 2012 is the AD helper service.

Does anyone know of a way to query the AD from SQL Server 2012?
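For what it's worth, one commonly used approach that does not depend on the AD helper service is a linked server over the ADSI OLE DB provider; a sketch (the DC components are placeholders for your domain):

EXEC sp_addlinkedserver
    @server     = 'ADSI',
    @srvproduct = 'Active Directory Services',
    @provider   = 'ADSDSOObject',
    @datasrc    = 'adsdatasource';

-- Query AD users through the linked server:
SELECT *
FROM OPENQUERY(ADSI,
    'SELECT cn, sAMAccountName
     FROM ''LDAP://DC=example,DC=com''
     WHERE objectClass = ''user''');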

Primary replica set server goes secondary after secondary fails

Posted: 12 Aug 2013 05:01 PM PDT

I have a 2-server replica set where, after the secondary fails, the primary goes into secondary mode while the secondary is in STARTUP2 (recovering). The problem with this is that I can't use the collection stored in that replica set freely; I'm getting errors when trying to use the collection:

pymongo.errors.OperationFailure: database error: ReplicaSetMonitor no master found for set: rs2  

Sometimes if I restart the mongod instances, the server rs2-1 is the primary for a while, but after some time (while the secondary is recovering) I see this in the logs of rs2-1 (the primary):

Tue May  7 17:43:40.677 [rsHealthPoll] replSet member XXX.XXX.XXX.XXX:27017 is now in state DOWN
Tue May  7 17:43:40.677 [rsMgr] can't see a majority of the set, relinquishing primary
Tue May  7 17:43:40.682 [rsMgr] replSet relinquishing primary state
Tue May  7 17:43:40.682 [rsMgr] replSet SECONDARY
Tue May  7 17:43:40.682 [rsMgr] replSet closing client sockets after relinquishing primary

Is there an easy way to make the primary stay primary after the secondary fails? Am I doing something wrong?

Thanks in advance!

MySQL backup InnoDB

Posted: 12 Aug 2013 01:00 PM PDT

I have a VoIP server running 24x7. Even at off-peak hours, at least 150 users are connected. My server runs MySQL with the InnoDB engine on Windows 2008. I'd like to take at least two full database backups without shutting down my service.

As per Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always good.

read here if you are interested

As I'm not a DBA, I'd like to know which would be the best solution for taking a database backup in my scenario.

Thanks,

Strange characters in mysqlbinlog output

Posted: 12 Aug 2013 11:00 AM PDT

Has anyone experienced this? Data replicates fine, but when it is output by mysqlbinlog there are hidden characters that break the input.

  • mysqlbinlog Ver 3.3 for Linux at x86_64
  • mysql 5.5.28 server

Thanks! Julie

Connecting to a SQL Server database from a Flash program

Posted: 12 Aug 2013 12:00 PM PDT

I currently have the ability to utilize Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7.

If I link my database with Adobe Flash Builder, are there any additional steps I must take in order to make the database live? Or, as long as my computer is running, will this database be accessible from any device that is utilizing it?

In other words, is this a LAN-only system, or does it automatically make itself available to the programs I link to it?

Oracle Express edition on Ubuntu - control file missing

Posted: 12 Aug 2013 07:01 PM PDT

I have installed the Oracle Express edition on Ubuntu as mentioned here.

I am facing issues when I try to create a sample table.

Started oracle

$ sudo service oracle-xe start
Starting Oracle Database 11g Express Edition instance.

Started sqlplus

$ sqlplus / as sysdba  

Executed the CREATE command

SQL> CREATE TABLE SAMPLE (ID NUMBER);
CREATE TABLE SAMPLE (ID NUMBER)
*
ERROR at line 1:
ORA-01109: database not open

After some research on the web, I tried to shut down and restart Oracle:

Shutdown command

SQL> shutdown
ORA-01507: database not mounted

ORACLE instance shut down.

Started the oracle instance

SQL> startup
ORACLE instance started.

Total System Global Area  688959488 bytes
Fixed Size                  2229688 bytes
Variable Size             411044424 bytes
Database Buffers          272629760 bytes
Redo Buffers                3055616 bytes
ORA-00205: error in identifying control file, check alert log for more info

I realized that the control file is missing at /u01/app/oracle/oradata/XE. XE is the DB name.

So I tried to create the control file as follows:

SQL> CREATE CONTROLFILE SET DATABASE XE RESETLOGS;

Control file created.

Tried to create the sample table again

SQL> CREATE TABLE SAMPLE(ID NUMBER);
CREATE TABLE SAMPLE(ID NUMBER)
*
ERROR at line 1:
ORA-01109: database not open

So I tried to issue the following command

SQL> ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE OPEN RESETLOGS
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/app/oracle/product/11.2.0/xe/dbs/dbs1XE.dbf'

What should be done next? I am clueless as I am not a database guy.

Note:

Output of

$ lsnrctl services

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 04-JAN-2013 09:15:37

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_XE)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
         LOCAL SERVER
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
The command completed successfully

Is using multiple foreign keys separated by commas wrong, and if so, why?

Posted: 12 Aug 2013 04:11 PM PDT

There are two tables: Deal and DealCategories. One deal can have many deal categories.

So the proper way should be to make a table called DealCategories with the following structure:

DealCategoryId (PK)
DealId (FK)
DealCategoryId (FK)

However, our outsourcing team stored the multiple categories in the Deal table this way:

DealId (PK)
DealCategory -- here they store multiple category ids separated by commas, like this: 18,25,32.

I feel that what they did is wrong, but I don't know how to clearly explain why this is not right.

How should I explain to them that this is wrong? Or maybe I'm the one who's wrong and this is acceptable?
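One concrete way to frame the explanation, sketched below with illustrative column names: with comma-separated values, finding all deals in a given category cannot use an index and forces a full scan, while the junction table turns it into a plain indexable join.

-- Comma-separated column: FIND_IN_SET cannot use an index.
SELECT DealId
FROM Deal
WHERE FIND_IN_SET('25', DealCategory) > 0;

-- Junction table: a straightforward join that indexes support.
SELECT d.DealId
FROM Deal AS d
INNER JOIN DealCategories AS dc ON dc.DealId = d.DealId
WHERE dc.CategoryId = 25;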
