Saturday, September 14, 2013

[how to] How to update a table by count from another table?

How to update a table by count from another table?

Posted: 14 Sep 2013 07:55 PM PDT

I have two tables:

CREATE TABLE country_statistics (
  id int(11) NOT NULL AUTO_INCREMENT,
  us int(11),
  uk int(11),
  ca int(11),
  PRIMARY KEY (id)
) ENGINE=InnoDB;

CREATE TABLE entries (
  id int(11) NOT NULL AUTO_INCREMENT,
  list varchar(255),
  PRIMARY KEY (id)
) ENGINE=InnoDB;

I want to search for the occurrence of each country in the list column. I use multiple subqueries to do so:

UPDATE country_statistics
SET us = (SELECT COUNT(*) FROM entries WHERE list LIKE '%United States%'),
    uk = (SELECT COUNT(*) FROM entries WHERE list LIKE '%United Kingdom%'),
    ca = (SELECT COUNT(*) FROM entries WHERE list LIKE '%Canada%')
WHERE id = 'xx';

How can I reduce the number of subqueries, count all the values in one subquery, and then update all the columns?
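In MySQL, the three scans can usually be collapsed into one pass with conditional aggregation: summing boolean LIKE expressions counts all three countries in a single read of entries. A minimal sketch of the counting idiom in Python/SQLite (the sample rows are made up; the same boolean-sum idiom works in MySQL):

```python
import sqlite3

# One scan of `entries` produces all three counts at once:
# a LIKE expression evaluates to 1 or 0, so SUM() counts matches.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY AUTOINCREMENT, list TEXT)")
conn.executemany("INSERT INTO entries (list) VALUES (?)", [
    ("United States, Canada",),
    ("United Kingdom",),
    ("United States",),
])
us, uk, ca = conn.execute("""
    SELECT SUM(list LIKE '%United States%'),
           SUM(list LIKE '%United Kingdom%'),
           SUM(list LIKE '%Canada%')
    FROM entries
""").fetchone()
print(us, uk, ca)  # 2 1 1
```

In MySQL you should then be able to feed the derived row into one multi-table UPDATE, something like `UPDATE country_statistics JOIN (SELECT SUM(...) ...) t SET us = t.us, ... WHERE id = 'xx'`, so entries is scanned once instead of three times.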

Multiple values for unique id?

Posted: 14 Sep 2013 08:08 PM PDT

I have two tables, customer and boxes, in a one-to-many relationship. The structure of the tables is as follows:

Customer table:

id#| name | date_purchased | address
1  | John | 2012-9-10      | N/A

Boxes table:

id#| type | quantity
1  | big  | 4

The id of the boxes table is a foreign key in the customer table.

What if John has purchased more than 4 big boxes? For example, he has ordered 2 small boxes as well. How do I add this to the database schema design?
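A common fix (sketched below with illustrative table and column names) is to move the foreign key onto the boxes side of the relationship, so one customer row can own any number of box rows, each with its own type and quantity:

```python
import sqlite3

# Sketch: the FK lives on the "many" side (boxes.customer_id),
# so a customer can have any number of box rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (
    id INTEGER PRIMARY KEY,
    name TEXT,
    date_purchased TEXT,
    address TEXT
);
CREATE TABLE boxes (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(id),
    type TEXT,
    quantity INTEGER
);
INSERT INTO customer VALUES (1, 'John', '2012-09-10', NULL);
INSERT INTO boxes (customer_id, type, quantity)
    VALUES (1, 'big', 4), (1, 'small', 2);
""")
rows = conn.execute(
    "SELECT type, quantity FROM boxes WHERE customer_id = 1 ORDER BY type"
).fetchall()
print(rows)  # [('big', 4), ('small', 2)]
```

With this shape, adding John's 2 small boxes is just another row in boxes, not a schema change.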

Database with 3 columns or 20 columns?

Posted: 14 Sep 2013 05:49 PM PDT

I have a database containing client information. Every client is described by 20 fields A1, ..., A20, each containing a numeric value.

Currently, the database has the following structure: client name | field | value, where every client is described by 20 entries in the database (one for each field A1...A20). This doesn't seem optimal.

Ideally, I should reshape it in the following way: client name | A1 | ... | A20.

However, there is a problem: over time, some of the fields A1...A20 will become obsolete and I will add new fields. Hence, in 3 years I may end up with 40 columns instead of 20. Is it still a wise idea to reshape the database with (at least for now) 20 columns?
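For what it's worth, the existing three-column (entity-attribute-value) layout can still be presented wide on demand with conditional aggregation, which is one way to keep the flexible schema while getting wide output. A minimal sketch with made-up client names and values:

```python
import sqlite3

# Pivot EAV rows into one wide row per client: each MAX(CASE ...)
# picks out one field's value; absent fields come back as NULL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client_fields (client TEXT, field TEXT, value REAL)")
conn.executemany("INSERT INTO client_fields VALUES (?, ?, ?)", [
    ("acme", "A1", 1.5),
    ("acme", "A2", 2.5),
    ("globex", "A1", 9.0),
])
rows = conn.execute("""
    SELECT client,
           MAX(CASE WHEN field = 'A1' THEN value END) AS A1,
           MAX(CASE WHEN field = 'A2' THEN value END) AS A2
    FROM client_fields
    GROUP BY client
    ORDER BY client
""").fetchall()
print(rows)  # [('acme', 1.5, 2.5), ('globex', 9.0, None)]
```

The trade-off: the wide table is faster and simpler to query directly, while the EAV form absorbs new fields (A21, A22, ...) without ALTER TABLE; the pivot above lets you defer the decision.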

How do I query a table with many trees of data, for one tree?

Posted: 14 Sep 2013 03:16 PM PDT

I have the NAICS schema loaded and all is working well. However, I'm confused as to how to query it in PostgreSQL.

The table looks something like this:

=========NAICS_2012=========
naics | naics_parent | title
----------------------------

This is what I've written:

WITH RECURSIVE search_graph(naics12, naics12_parent, title, depth, path_info) AS (
        SELECT naics12, naics12_parent, title, 1, array[naics12]
        FROM naics.codes
        WHERE naics12_parent IS NULL
    UNION ALL
        SELECT c.naics12, c.naics12_parent, c.title, sg.depth+1, sg.path_info||c.naics12
        FROM naics.codes AS c, search_graph AS sg
        WHERE c.naics12_parent = sg.naics12
)
SELECT * FROM search_graph ORDER BY path_info;

This produces output like this (be sure to scroll to the end)

 naics12 | naics12_parent | title                                             | depth | path_info
---------+----------------+---------------------------------------------------+-------+----------------------------
 11      |                | Agriculture, Forestry, Fishing and Hunting        |     1 | {11}
 111     | 11             | Crop Production                                   |     2 | {11,111}
 1111    | 111            | Oilseed and Grain Farming                         |     3 | {11,111,1111}
 11111   | 1111           | Soybean Farming                                   |     4 | {11,111,1111,11111}
 111110  | 11111          | Soybean Farming                                   |     5 | {11,111,1111,11111,111110}
 11112   | 1111           | Oilseed (except Soybean) Farming                  |     4 | {11,111,1111,11112}
 111120  | 11112          | Oilseed (except Soybean) Farming                  |     5 | {11,111,1111,11112,111120}
 11113   | 1111           | Dry Pea and Bean Farming                          |     4 | {11,111,1111,11113}
 111130  | 11113          | Dry Pea and Bean Farming                          |     5 | {11,111,1111,11113,111130}
 11114   | 1111           | Wheat Farming                                     |     4 | {11,111,1111,11114}
 111140  | 11114          | Wheat Farming                                     |     5 | {11,111,1111,11114,111140}
 11115   | 1111           | Corn Farming                                      |     4 | {11,111,1111,11115}
 111150  | 11115          | Corn Farming                                      |     5 | {11,111,1111,11115,111150}
 11116   | 1111           | Rice Farming                                      |     4 | {11,111,1111,11116}
 111160  | 11116          | Rice Farming                                      |     5 | {11,111,1111,11116,111160}
 11119   | 1111           | Other Grain Farming                               |     4 | {11,111,1111,11119}
 111191  | 11119          | Oilseed and Grain Combination Farming             |     5 | {11,111,1111,11119,111191}
 111199  | 11119          | All Other Grain Farming                           |     5 | {11,111,1111,11119,111199}
 1112    | 111            | Vegetable and Melon Farming                       |     3 | {11,111,1112}
 11121   | 1112           | Vegetable and Melon Farming                       |     4 | {11,111,1112,11121}
 111211  | 11121          | Potato Farming                                    |     5 | {11,111,1112,11121,111211}
 111219  | 11121          | Other Vegetable (except Potato) and Melon Farming |     5 | {11,111,1112,11121,111219}
 1113    | 111            | Fruit and Tree Nut Farming                        |     3 | {11,111,1113}
 11131   | 1113           | Orange Groves                                     |     4 | {11,111,1113,11131}
 111310  | 11131          | Orange Groves                                     |     5 | {11,111,1113,11131,111310}

However, what I want is to see one tree, so I can write a query like,

SELECT * FROM search_graph WHERE naics12 = <foo>;  

And get a result set back as if the table had been preloaded with only the parents of the supplied NAICS12 code.

Any idea how to further manipulate the data to get this kind of result?
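One approach (a sketch, not necessarily the only way) is to recurse upward from the supplied code toward the root instead of downward from the roots; the same WITH RECURSIVE shape works in PostgreSQL. Illustrated here in Python/SQLite with a tiny subset of the NAICS data:

```python
import sqlite3

# Walk *up* the parent pointers from one code, so the result set
# contains only that code and its ancestors (its single tree path).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE codes (naics12 TEXT PRIMARY KEY, naics12_parent TEXT, title TEXT);
INSERT INTO codes VALUES
  ('11',   NULL,  'Agriculture, Forestry, Fishing and Hunting'),
  ('111',  '11',  'Crop Production'),
  ('1111', '111', 'Oilseed and Grain Farming');
""")
rows = conn.execute("""
WITH RECURSIVE ancestors(naics12, naics12_parent, title) AS (
    SELECT naics12, naics12_parent, title FROM codes WHERE naics12 = ?
    UNION ALL
    SELECT c.naics12, c.naics12_parent, c.title
    FROM codes AS c JOIN ancestors AS a ON c.naics12 = a.naics12_parent
)
SELECT naics12 FROM ancestors
""", ("1111",)).fetchall()
print(rows)
```

The anchor member selects the supplied code, and each recursive step joins to its parent; when the parent is NULL the recursion stops, leaving exactly one root-to-node path.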

Is it possible to disable replication for a specific database in RDS?

Posted: 14 Sep 2013 10:48 AM PDT

My application uses a mechanism of SQL patches as part of the upgrade to a new version.

Before running an upgrade, I need to try to run the SQL patches on a copy of the database to ensure they complete without error, before running them on the actual db.

I can do it on RDS, by creating a temporary DB, running the patches on it, and dropping it thereafter.

But ideally, I'd want to do so without having the temporary database written to the binlog for backup and replication.

Is it possible to mark a specific database as non-replicated in an RDS instance?

My other option is to create a temporary RDS instance just for testing the upgrade, but that's a bit overkill and takes extra time to complete.

show innodb engine status tuning db server

Posted: 14 Sep 2013 10:24 AM PDT

I am new to dealing with the InnoDB engine status, and I am not too sure about the I/O section. Below is a sample of my output.

FILE I/O
--------
I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
9705 OS file reads, 16082 OS file writes, 8524 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 5.14 writes/s, 2.00 fsyncs/s
-------------------------------------

I also notice a number of

TRANSACTION 0 6929906, not started, process no 1402, OS thread id 140010566186752
MySQL thread id 28, query id 475745 192.168.100.11 user1
--------

Why are there so many transactions in the "not started" state? What else can I look at in this SHOW ENGINE output to tune my DB server?

Restore huge MySQL database file

Posted: 14 Sep 2013 10:19 AM PDT

I am trying to restore an 18 GB MySQL .sql database file on a Linux server.

What values should be set in /etc/my.cnf to speed up the restore?

Getting distinct list of records based on the MAX of a column

Posted: 14 Sep 2013 01:27 PM PDT

I have a status table (tblTestActionStatus) which has three columns

ID_TestAction: references some test action

ID_Status: a look up table reference for the variety of possible statuses for the action

StatusDateTime: a datetime field which logs the exact time a status update was made for the test action.

So while a test action is happening, periodic status updates are made.

I am interested to know the current status which would be the last status update of a test action as of the moment I run the query. For example, test action # 100 has gotten 4 status changes so far. The first was 1, then 2, then 3, and most recently 4. And test action # 101 has gotten 3 status updates so far. So I would like to write a query that returns for me all the columns in the table but for only the most recent StatusDateTimes.

I attached a pic which shows the table contents and the rows I would like to see coming back from the query highlighted.


I was able to find an answer with some more searching in the archive.

http://stackoverflow.com/questions/1049702/create-a-sql-query-to-retrieve-most-recent-records

So the SQL I created from that post which works in my case is as follows...

SELECT tblTestActionStatus.ID_TestAction, StatusDateTime, ID_Status
FROM tblTestActionStatus
INNER JOIN (
    SELECT MAX(StatusDateTime) AS LatestDate, ID_TestAction
    FROM tblTestActionStatus
    GROUP BY ID_TestAction
) SubMax
    ON tblTestActionStatus.StatusDateTime = SubMax.LatestDate
    AND tblTestActionStatus.ID_TestAction = SubMax.ID_TestAction
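An alternative worth knowing (available on SQL Server since 2005) is ROW_NUMBER() over a partition, which also returns exactly one row per ID_TestAction even when two statuses share the same timestamp, something the MAX-join can't guarantee. A sketch of the idiom in Python/SQLite (window functions need SQLite 3.25+), with made-up rows:

```python
import sqlite3

# Number each action's statuses newest-first, then keep row 1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblTestActionStatus (ID_TestAction INT, ID_Status INT, StatusDateTime TEXT);
INSERT INTO tblTestActionStatus VALUES
  (100, 1, '2013-09-01 10:00'), (100, 4, '2013-09-01 13:00'),
  (101, 2, '2013-09-01 11:00'), (101, 3, '2013-09-01 12:00');
""")
rows = conn.execute("""
SELECT ID_TestAction, StatusDateTime, ID_Status FROM (
    SELECT *, ROW_NUMBER() OVER (
        PARTITION BY ID_TestAction ORDER BY StatusDateTime DESC
    ) AS rn
    FROM tblTestActionStatus
) t
WHERE rn = 1
ORDER BY ID_TestAction
""").fetchall()
print(rows)  # latest status per test action
```

The subquery-with-MAX form from the question and this form usually perform similarly; the window form is easier to extend (e.g. "latest two statuses" is just rn <= 2).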

SQL Server 2008 Enterprise RESEED to 0 abnormally

Posted: 14 Sep 2013 09:29 AM PDT

I am using EF 5.0, and sometimes no one can register on the server because of the same error: a primary key violation (PK_Invalid). The PK is an auto-increment (identity) column; it sometimes jumped from 1000 to 10000, but now it is resetting to 0 and trying to insert an existing id. Why does this happen, and how can I prevent it?

NOTE

To fix I am using DBCC CHECKIDENT (User, reseed, 1080002).

database table schema help reservation to items to products

Posted: 14 Sep 2013 09:23 AM PDT

I'm trying to get a decent structure for an app's database and want to ensure I'm using best practice and getting the relationships right. I'm not sure if I have a one-to-many relationship or a many-to-many.

I have a table 'reservations' that receives reservations. I have another table called 'Items' that stores the products ordered that belong to the reservation, including options, quantity, etc. So I assumed the relationship between reservations and items is one-to-many.

I then have another table called 'Products' which has specifics about the items available. I've assumed this has a one-to-one relationship with Items.

Is this the best approach, or are there better ways I should learn/study?

Is my understanding of the relationships correct, or is the 'items' table simply a link table for a many-to-many relationship between reservations and products?

Advice on approach and best practice is appreciated.

The database is MySQL.

Thanks

How to optimize this query more?

Posted: 14 Sep 2013 07:39 AM PDT

First of all, I am a noob at SQL. I am working on a class project where I have some tables like:

Table user

user_id | username | name
1       | nihan    | Nihan Dip
2       | dip      | Meaw ghew
more    | more     | more

Table Friend

you   | friend_id
1     | 2
1     | 27
2     | 9
more  | more

Table Follow

user_id | follows
1       | 99
7       | 34

Table post

post_id | user_id | type | content     | post_time
1       | 1       | text | loren toren | timestamp
2       | 2       | text | ipsum       | timestamp

Now I want to get posts by a user's friends, the people he follows, and of course himself, so I wrote this SQL:

SELECT username, name, content, post_time
FROM post
INNER JOIN user ON user.user_id = post.user_id
WHERE post.user_id IN (
        SELECT friend_id FROM friend WHERE you = 1
        UNION ALL
        SELECT follows FROM follow WHERE user_id = 1
    )
    OR post.user_id = 1
ORDER BY post_time DESC
LIMIT 10

This query works just fine. I just wanted to know whether any more optimization could be done, and if so, how?
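One tweak worth trying (whether it helps depends on the optimizer and on having indexes such as friend(you), follow(user_id), and post(user_id, post_time)) is to fold the trailing OR post.user_id = 1 into the derived id list and use UNION instead of UNION ALL, so duplicate ids are removed and the WHERE clause is a single IN that can use the post.user_id index. A sketch in Python/SQLite with toy data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user   (user_id INT PRIMARY KEY, username TEXT, name TEXT);
CREATE TABLE friend (you INT, friend_id INT);
CREATE TABLE follow (user_id INT, follows INT);
CREATE TABLE post   (post_id INT, user_id INT, content TEXT, post_time INT);
INSERT INTO user VALUES (1,'nihan','Nihan Dip'), (2,'dip','Meaw ghew'), (9,'x','X');
INSERT INTO friend VALUES (1, 2);
INSERT INTO follow VALUES (1, 9);
INSERT INTO post VALUES (1,1,'own post',1), (2,2,'friend post',2), (3,9,'followed post',3);
""")
rows = conn.execute("""
SELECT u.username, p.content
FROM post AS p
JOIN user AS u ON u.user_id = p.user_id
WHERE p.user_id IN (
    SELECT friend_id FROM friend WHERE you = 1
    UNION
    SELECT follows FROM follow WHERE user_id = 1
    UNION
    SELECT 1                         -- the user himself, instead of OR
)
ORDER BY p.post_time DESC
LIMIT 10
""").fetchall()
print(rows)  # [('x', 'followed post'), ('dip', 'friend post'), ('nihan', 'own post')]
```

Avoiding the OR matters because mixing an IN with an OR on the same column can defeat index usage in MySQL; run EXPLAIN on both shapes to compare.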

How to Restore Emails from Exchange Server 2007 [on hold]

Posted: 14 Sep 2013 07:42 AM PDT

My Exchange Server 2007 database got corrupted, and now I am unable to read the emails in the EDB files. Please suggest a solution.

Thanks

Drop an index in postgresql

Posted: 14 Sep 2013 07:33 PM PDT

I created an index by mistake and now I'm trying to drop it. Since my table's data is large, dropping the index is taking a lot of time. Is there any way to drop the index more quickly?

Thanks, Karthika

Table redirect / filter / trigger on select

Posted: 14 Sep 2013 09:20 AM PDT

Is there any way to redirect queries to different tables / views based on the package that's referencing the table? I.e. packages A and B both have "select grade from schema1.grd_tbl", but I want package A to get the percent grade that's stored in the table, and package B to get a letter grade that's calculated from the percent. I'd like to avoid modifying the (dozens of) packages that reference the table. I'd rather 'spoof' the table somehow if we can, replacing the percent in the grade column with a letter when called from those packages. (The column's varchar2; percents are coded as characters.)

First thought was to create a synonym pointing to a view that massages the column based on the calling package, but the code fully qualifies the table name in most cases so that doesn't seem doable. I went looking for something equivalent to a trigger on select; the closest I've found is fine-grained audit, and it's not row-based.

Is there magic somewhere that might let me do this? Any hints appreciated.

Perry.

Modeling a database for easy counting / reporting

Posted: 14 Sep 2013 08:20 PM PDT

I have an app where user is known (user_id) and he can do several actions (action_id). Every time he makes an action I need to save the fact that he made it for reports/analytics. I guess it is similar to other analytic solutions and their db design.

Once I have the data, given a time window (minute resolution), I need to count, for each user (all or some), the number of times he performed actions and which actions he performed (sum all the data grouped by action_id).

Some assumptions:

  • The number of users are ~1000.
  • Action types are ~100.
  • Actions can happen 24/7.
  • The time windows can span from minutes to days and are random.
  • A time window can't go back more than 30 days.

I'm considering SQL, NoSQL and RRD to save the data.

I put RRD here because it's easy to implement the inserts via statsd+graphite. I'm concerned that if I take this approach, the querying (although provided by graphite) will not be indexed and will probably have to scan all the data whenever I ask for a window/user. Another problem is that when querying all the data, all users' info will be needed, resulting in reading all the files concurrently, which I'm not sure is a good thing.

SQL - Very easy implementation for inserting and querying the data. Easy to index, order, and group by. However, I'm not sure it will hold up if I'm anticipating high traffic. Also, I'm not sure how efficient SQL's COUNT() is after a GROUP BY (I haven't used SQL in the last few years). Can it offer parallel computation?

NoSQL - Is there a solution out there that is the right fit for this type of scenario (perhaps a Map/Reduce algorithm to fast generation of counts in a time window?)

Thanks for helping me model this.

Putting a Select statement in a transaction

Posted: 14 Sep 2013 01:20 PM PDT

What is the difference between these 2 queries:

start transaction;
select * From orders Where id=1;
UPDATE orders SET username="John" Where id=1;
commit;

And without transaction:

select * From orders Where id=1;
UPDATE orders SET username="John" Where id=1;

What is the effect of having a SELECT inside a transaction?

If DELETE FROM orders WHERE id=1 were called from another session right after the SELECT in each case, when would it be processed?

Replication randomly started failing

Posted: 14 Sep 2013 08:20 AM PDT

I have replication set up between two servers. This morning I came in and it was failing. I was getting these errors:

Error messages:

  • The process could not execute 'sp_replcmds' on 'PSQL3'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011) Get help:
  • Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission. (Source: MSSQLServer, Error number: 15517)
  • The process could not execute 'sp_replcmds' on 'PSQL3'. (Source: MSSQL_REPL, Error number: MSSQL_REPL22037)

Problem compiling view when it is referencing a table in an other view: insufficient privileges

Posted: 14 Sep 2013 11:20 AM PDT

Oracle 11g R2 Logged on: SYS / AS SYSDBA

When I try to compile or create a view that references local schema tables, it works fine.

The problem occurs when I try to compile the same view referencing a table in another schema (schema.table) in my query.

Oracle throws the exception ORA-01031: insufficient privileges.

Remember I am using SYS account (sysdba).

CONNECT BY using 3 tables (2 DIM and 1 FACT) to get the level and dependencies

Posted: 14 Sep 2013 05:20 AM PDT

Below is my scenario; any help in this regard would be highly appreciated. I need to get the list of managers and the persons reporting to each manager, with LEVEL, at the id level. But the fact table doesn't have a hierarchical relationship with itself; I need to get the ids via the keys from 2 different dimension tables to generate the list. I am not familiar enough with CONNECT BY and joins to get this list.

Fact table:

Person_ky | Mngr_ky
100       | 3
50        | 6
2000      | 9999
60        | 3
150       | 4
20        | 2

In DIM Person table

Person_ky | Pers_Id
100       | x
50        | y
2000      | aaaa
60        | a
150       | m
20        | q

In DIM Manger table

Mngr_ky | Mngr_id
3       | m
6       | a
4       | q
2       | t
999     | bbb

In the above scenario, pers_ky 100 (id 'x') reports to mngr_ky 3 (Mngr_id 'm'), person_ky 150 ('m') reports to mngr_ky 4 ('q'), and person_ky 20 ('q') reports to mngr_ky 2 ('t'). So for person x, I need his managers 'm', 'q', and 't' in the result.

pers_id | mgr_id | level
x       | m      | 1     --> for pers_ky 100
x       | q      | 2
x       | t      | 3
y       | a      | 1     --> for pers_ky 50
y       | m      | 2
y       | q      | 3
y       | t      | 4
aaaa    | bbb    | 1     --> for pers_ky 2000
a       | m      | 1     --> for pers_ky 60
a       | q      | 2
.
.
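Since the fact table isn't self-referencing, each recursion step has to hop fact → manager dim → person dim: resolve the current manager's id to a person key, then look up that person's manager. Oracle 11gR2 supports recursive subquery factoring (recursive WITH), so the same shape should work there; here is a sketch of the logic in Python/SQLite using the question's sample data, starting from person 'x' (table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fact       (person_ky INT, mngr_ky INT);
CREATE TABLE dim_person (person_ky INT, pers_id TEXT);
CREATE TABLE dim_mngr   (mngr_ky INT, mngr_id TEXT);
INSERT INTO fact VALUES (100,3),(50,6),(60,3),(150,4),(20,2);
INSERT INTO dim_person VALUES (100,'x'),(50,'y'),(60,'a'),(150,'m'),(20,'q');
INSERT INTO dim_mngr VALUES (3,'m'),(6,'a'),(4,'q'),(2,'t');
""")
rows = conn.execute("""
WITH RECURSIVE chain(pers_id, mgr_id, lvl) AS (
    -- anchor: the person's direct manager
    SELECT p.pers_id, m.mngr_id, 1
    FROM fact f
    JOIN dim_person p ON p.person_ky = f.person_ky
    JOIN dim_mngr   m ON m.mngr_ky   = f.mngr_ky
    WHERE p.pers_id = 'x'
    UNION ALL
    -- step: treat the last manager as a person and find *their* manager
    SELECT c.pers_id, m.mngr_id, c.lvl + 1
    FROM chain c
    JOIN dim_person p ON p.pers_id   = c.mgr_id
    JOIN fact       f ON f.person_ky = p.person_ky
    JOIN dim_mngr   m ON m.mngr_ky   = f.mngr_ky
)
SELECT * FROM chain ORDER BY lvl
""").fetchall()
print(rows)  # [('x', 'm', 1), ('x', 'q', 2), ('x', 't', 3)]
```

Dropping the WHERE clause in the anchor gives the chain for every person at once, matching the expected output above; in Oracle the lvl column plays the role of LEVEL.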

Connect Apache Http server with Oracle

Posted: 14 Sep 2013 06:20 AM PDT

I have a problem connecting Apache HTTP Server with Oracle. Everything runs on openSUSE 12.3.

First I installed the Oracle Instant Client 11.2 and set environment variables:

ORACLE_HOME=/usr/local/instantclient_11_2/
LD_LIBRARY_PATH=$ORACLE_HOME:$LD_LIBRARY_PATH
PATH=$ORACLE_HOME:$PATH
TNS_ADMIN=$ORACLE_HOME/network/admin
export ORACLE_HOME LD_LIBRARY_PATH PATH TNS_ADMIN

After that I compiled Apache httpd:

#apr
./configure --prefix=/usr/local/apache2/apr
make
make install

#apr-util
./configure --prefix=/usr/local/apache2/apr-util --with-apr=/usr/local/apache2/apr/
make
make install

#httpd
CPPFLAGS=-IORACLE_HOME/include LDFLAGS=-LORACLE_HOME ./configure --prefix=/usr/local/apache2/httpd --with-apr=/usr/local/apache2/apr/ --with-apr-util=/usr/local/apache2/apr-util/
make
make install

Unfortunately, the installation was not successful because mod_dbd_oracle.so was not created. I think it is connected with this:

#./configure output:

checking oci.h usability... yes
checking oci.h presence... yes
checking for oci.h... yes
checking for OCIEnvCreate in -lclntsh... no
   setting LIBS to "-lnnz11"
checking for OCIEnvCreate in -lclntsh... no
   nulling LIBS
   setting LIBS to "-lnnz10"

I could not find a way to solve this, so I decided to use Oracle with the ODBC driver. I installed unixODBC 2.2.12-219.1.1.

After that I modified odbc.ini and odbcinst.ini:

#odbc.ini
[xe]
DRIVER = oraclevb
DSN = xe
ServerName = //192.168.56.101:1521/xe
Application Attributes = T
Attributes = W
BatchAutocommitMode = IfAllSuccessful
BindAsFLOAT = F
CloseCursor = F
DisableDPM = F
DisableMTS = T
EXECSchemaOpt =
EXECSyntax = T
Failover = T
FailoverDelay = 10
FailoverRetryCount = 10
FetchBufferSize = 64000
ForceWCHAR = F
Lobs = T
Longs = T
MaxLargeData = 0
MetadataIdDefault = F
QueryTimeout = T
ResultSets = T
SQLGetData extensions = F
Translation DLL =
Translation Option = 0
DisableRULEHint = T
StatementCache=F
CacheBufferSize=20
UseOCIDescribeAny=F

#odbcinst.ini:
[oraclevb]
Description = Oracle ODBC
Driver = /usr/local/instantclient_11_2/libsqora.so.11.1
Setup =
FileUsage =
CPTimeout =
CPReuse =

I checked how it works with the command:

isql -v xe user password

All was OK. So I compiled the Apache HTTP source again and modified httpd.conf to use ODBC:

LoadModule authn_dbd_module modules/mod_authn_dbd.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_dbd_module modules/mod_authz_dbd.so
LoadModule authz_core_module modules/mod_authz_core.so

LoadModule access_compat_module modules/mod_access_compat.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule dbd_module modules/mod_dbd.so

DBDriver odbc
DBDParams "datasource=xe"

<Directory "/sample">
    AuthType Basic
    AuthName "private area"
    AuthBasicProvider dbd
    AuthDBDUserPWQuery "SELECT password FROM USERS WHERE name = %s"
    AuthzDBDQuery "SELECT group FROM GROUPS user_name = %s"
    Require dbd-group grup1 grup2
</Directory>

I started the Apache HTTP Server and found this in the error logs:

[Sun Jul 14 15:19:07 2013] [dbd_odbc] SQLConnect returned SQL_ERROR (-1) at dbd/apr_dbd_odbc.c:1130 [unixODBC][Driver Manager]Can't open lib '/usr/local/instantclient_11_2/libsqora.so.11.1' : file not found 01000
[Sun Jul 14 15:19:07.332394 2013] [dbd:error] [pid 13374:tid 140250220050240] (20014)Internal error: AH00629: Can't connect to odbc: [dbd_odbc] SQLConnect returned SQL_ERROR (-1) at dbd/apr_dbd_odbc.c:1130 [unixODBC][Driver Manager]Can't open lib '/usr/local/instantclient_11_2/libsqora.so.11.1' : file not found 01000

I don't know what to do about this; any suggestions would be very helpful. Maybe there is another way to perform authorization and authentication against the Oracle database, or another Apache module?

representation in ms-access

Posted: 14 Sep 2013 07:20 PM PDT

I have a database in Microsoft Access. I want to know how to look up a single datum from a reference table given a dynamic set of values. Here is a representation of what I mean:

I have the following tables:

Points for Pushups(m):

Reps | Age 17-21 | Age 22-26 | Age 27-31
1    | 6         | 7         | 8
2    | 7         | 9         | 9
3    | 9         | 11        | 12

Fitness Tests:

Name | Reps | Test Date
Bob  | 2    | 1 Jan 2009
Jill | 1    | 5 May 2010

People:

Name | DOB
Bob  | 1 Jan 1987
Jill | 2 Feb 1985
Sal  | 3 Mar 1991

I want the query to use People.DOB and the test date to find the age the person was at the time of the test. I then want the query to use this value to determine which column to look in, and the value from Reps to determine which row to look in, coming back with the single value and naming it Points.

For example, I want Bob to show:

Query:

Name | DOB        | Age At Test | Reps | Points
Bob  | 1 Jan 1987 | 22          | 2    | 9

Does anyone know how to do the dynamic reference part?

I know how to make the query and I know how to get the age; I just don't know how to use the values as column references into the reference table. I've seen it done, but long ago, and I never looked into it.

PostgreSQL: DBLink weird permission/connection error

Posted: 14 Sep 2013 07:40 PM PDT

The goal is to (1) create a connection, (2) return a view of another database, and (3) close the same connection all to be stored in a view (no stored procedure or embedded password). I came up with the following code, which uses a trust account and seems to work, until filtering on a boolean field. Searching for false (field=false) works, but searching for true (field=true) or IS NOT NULL prompts for the password.

It's really odd that the data is being returned w/o needing the password, but when filtering, that is when the password is required.


PostgreSQL: v8.4:

  • There are two databases: db_remote and db_local
  • There are two accounts:
    • trusted_user (set up as trust in pg_hba.conf to both databases)
    • md5_user (set up as md5 in pg_hba.conf to both databases)
  • db_remote has a view (v_sessions), which contains two fields: a text field and a boolean field, which is created with a CASE statement.

From within db_local (as md5_user):

-- Creates okay:
CREATE VIEW v_sessions AS
SELECT * FROM (
    select '1' query_type, '' as username, false as is_logged_in
    from   dblink_connect_u('connection', 'host=development dbname=db_remote user=trusted_user')
    union
    select '2' query_type, username, is_logged_in
    from   dblink('connection', 'select username, is_logged_in from v_sessions') as v_session(username text, is_logged_in boolean)
    union
    select '3' query_type, '', false
    from   dblink_disconnect('connection')
) v_sessions
WHERE query_type=2;

-- Calling the view with filter:
SELECT * FROM v_sessions WHERE is_logged_in;

What works:

  • removing the AND is_logged_in
  • filtering on a text field AND username = 'some value' !!
  • filtering on the boolean field for a false (AND is_logged_in = false) !!

What doesn't work:

  • filtering on the boolean field for a true value: AND is_logged_in IS NOT NULL, AND is_logged_in, or AND is_logged_in = true, gives the following error message:

    ERROR: password is required
    DETAIL: Non-superusers must provide a password in the connection string.


Filtering inside the second query works, but it doesn't help when wanting to store this in a view.


I suppose I'm looking for suggestions, but I'm really trying to understand what is going on, e.g. why searching for false works and true does not. I first thought the connection was preemptively closing, but it's not.

How to setup SQL active/active cluster to achieve Blue / Green instance switching?

Posted: 14 Sep 2013 05:20 PM PDT

I am wondering if anyone has ever used a multi-instance cluster (nee 'Active/Active') to achieve blue/green (or A/B) deployment scenarios, and what the best way of configuring it is (using SQL 2012 / Windows 2008 R2)?

To be specific, the scenario I want to achieve is to be able to switch between which cluster instance is being connected to by clients without either the clients or the SQL instances knowing (I stress I'm not talking about node failover here). I'm envisaging that the best way to achieve this is something like:

  • Setup 2 node cluster, each of which has InstanceA and InstanceB instances
  • Configure both InstanceA and InstanceB to listen as if they were the default instance on their cluster address (given each instance on a cluster has its own unique IP)
  • Use DNS to switch which virtual address clients actually connect to.

This should hopefully enable me to do the following:

  • Deploy database to instance A, and have clients connect to it via DNS alias as if default instance
  • Deploy new version of database to instance B
  • Vet new version of database (connecting explicitly to cluster\InstanceB)
  • Redirect DNS alias to point to instance B's cluster name
  • Clients now connect to InstanceB without realising anything's changed
  • Both instances can still failover to the other node in a true outage

Joining the dots, it seems like this should be possible:

... but I've never seen a full example. Has anyone done it? Will what's proposed above work? What have I missed?

Time series data for ad platform

Posted: 14 Sep 2013 12:20 PM PDT

I am trying to figure out how to store time series data for an ad platform I am working on.

Basically I want to know some strategies/solutions for storing billions of rows of data so that I can easily search it (about 6-8 indexes on the table) and get fast counts based on queries.

I tried MySQL with the TokuDB engine, and it seems to be very fast, but it becomes extremely slow when I try to do a COUNT query once the table reaches about 5-8 million rows.

I was looking at some NoSQL alternatives, but since I want to be able to search this data, these are probably not the best solution. I was using DynamoDB; I would have had to store the data in many places in order to account for all the searching on the data.

What I am storing is a row in the database for each click on an ad. This table will grow very fast, especially when the site gets large.

Another solution would be to separate this data per advertiser: each advertiser would get their own table holding all their data. Each table would be much smaller and the COUNT queries much faster. I could even split it up by advertiser and month.

My goal is to give an advertiser the ability to search and display in a paginated way all their clicks. They should be able to get data between a time period and filter by about 5-8 other indexes if they want to.
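
One possible shape for this (a sketch, not the asker's actual schema): partition the click table by month so that old data can be dropped cheaply, and put a composite index on advertiser and timestamp so range COUNTs for one advertiser stay on the index:

```sql
-- Hypothetical table and column names; MySQL range partitioning requires
-- the partition column (clicked_at) to appear in every unique key.
CREATE TABLE ad_clicks (
  id            BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  advertiser_id INT UNSIGNED    NOT NULL,
  ad_id         INT UNSIGNED    NOT NULL,
  clicked_at    DATETIME        NOT NULL,
  country       CHAR(2)         NOT NULL,
  PRIMARY KEY (id, clicked_at),
  KEY idx_adv_time (advertiser_id, clicked_at)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(clicked_at)) (
  PARTITION p201308 VALUES LESS THAN (TO_DAYS('2013-09-01')),
  PARTITION p201309 VALUES LESS THAN (TO_DAYS('2013-10-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);

-- A per-advertiser count over a date range then prunes to one partition
-- and can be answered from idx_adv_time alone:
SELECT COUNT(*)
FROM ad_clicks
WHERE advertiser_id = 42
  AND clicked_at >= '2013-09-01' AND clicked_at < '2013-10-01';
```

This keeps everything in one table (avoiding a table-per-advertiser explosion) while still getting the "smaller working set" benefit the per-advertiser split was aiming for.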

If an account has REQUIRE SUBJECT, does it still need a password?

Posted: 14 Sep 2013 06:20 PM PDT

I'm in the process of setting up SSL-secured replication between two servers. Each server has its own public/private keypair, and the CA cert is just the concatenation of the two public certs, like this answer.

Now I'm updating the replication account with REQUIRE SUBJECT "exact subject of the client"

Is there any practical value to also having a password on the replication account (IDENTIFIED BY "secret")?
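
For context, the combination being asked about looks something like this (account name, password, and subject string are placeholders):

```sql
-- Certificate subject AND password: REQUIRE SUBJECT only proves the client
-- holds a certificate with that subject signed by a CA the server trusts,
-- so the password adds an independent second factor.
GRANT REPLICATION SLAVE ON *.*
  TO 'repl'@'%'
  IDENTIFIED BY 'secret'
  REQUIRE SUBJECT '/C=US/ST=CA/O=Example/CN=replica1';
```

Whether the password is worth keeping is exactly the question: it is defense in depth if a private key leaks, at the cost of one more secret to manage.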

Install MariaDB without password prompt in ubuntu

Posted: 14 Sep 2013 07:41 AM PDT

I need to install the MariaDB server on Ubuntu without a password prompt. I executed the following commands, but they made no difference: the password prompt still appears.

I used the following commands :

  export DEBIAN_FRONTEND=noninteractive
  echo mariadb-server-5.5 mariadb-server/root_password password mypass | debconf-set-selections
  echo mariadb-server-5.5 mariadb-server/root_password_again password mypass | debconf-set-selections
  sudo apt-get -y install mariadb-server

I also checked some links on Stack Overflow, but they did not work:

http://stackoverflow.com/questions/8138636/install-mysql-on-ubuntu-natty-without-password-prompt-and-using-shell-variable-a

http://stackoverflow.com/questions/7739645/install-mysql-on-ubuntu-without-password-prompt

Please help me in this regard. Is there anything wrong with my code?
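
One plausible failure mode (a guess, not confirmed by the question): `debconf-set-selections` writes to a root-owned database, so piping into it as an ordinary user can fail or be ignored, while the later `sudo apt-get` never sees the preseeded answers. Running the preseed step as root might look like this:

```shell
export DEBIAN_FRONTEND=noninteractive

# Preseed the root password as root; adjust the package name
# (mariadb-server-5.5) to match the version in your repositories.
sudo debconf-set-selections <<'EOF'
mariadb-server-5.5 mariadb-server/root_password password mypass
mariadb-server-5.5 mariadb-server/root_password_again password mypass
EOF

sudo apt-get -y -q install mariadb-server
```

If `sudo` is used for `apt-get`, the `DEBIAN_FRONTEND` export may also need to survive into the root environment (e.g. `sudo DEBIAN_FRONTEND=noninteractive apt-get ...`).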

Converting dbo file from windows to linux

Posted: 14 Sep 2013 10:20 AM PDT

I have a .dbo file which was created on Windows. This file loads successfully into a MySQL database on Windows. I need to load the same file into a MySQL/MariaDB database on Linux. How do I convert the file created on Windows for use on Linux?
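
If the file itself needs converting at all, the usual culprit is Windows CRLF line endings (or a UTF-8 BOM). A minimal sketch, using a fabricated stand-in file since the question names no real one:

```shell
# Fabricate a stand-in for the Windows-made dump (CRLF line endings):
printf 'CREATE TABLE t (id INT);\r\nINSERT INTO t VALUES (1);\r\n' > dump_windows.sql

# Strip the carriage returns; dos2unix, if installed, does the same job:
tr -d '\r' < dump_windows.sql > dump_unix.sql

# The cleaned file can then be loaded as usual, e.g.:
#   mysql -u root -p mydb < dump_unix.sql
```

That said, MySQL's client generally tolerates CRLF in SQL scripts, so it is worth checking the actual error message before assuming line endings are the problem.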

How to change page size in an existing sybase database

Posted: 14 Sep 2013 07:20 AM PDT

I have an existing database with a 1 KB page size. I want to increase that to 16 KB, and enlarge the cache, to take advantage of my memory for best performance. The Sybase documentation says you can't change the page size of an existing database, but you can create a new empty database with the preferred page size. So how can I move my data from the old database to the new database with the bigger page size?
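
One common route (a sketch only; server, login, database, and table names are placeholders): recreate the schema in the new database, then move the data table by table with bcp in character mode.

```shell
# Export one table from the old 2K-page database, then load it into the
# new 16K-page database created with the desired page size. Repeat per table.
bcp olddb..mytable out mytable.dat -c -Usa -Psecret -SMYSERVER
bcp newdb..mytable in  mytable.dat -c -Usa -Psecret -SMYSERVER
```

The schema itself (tables, indexes, procedures) has to be created in the new database first, for example from scripts generated with a tool such as ddlgen, before the bcp loads will succeed.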

MySQL : Why are there "test" entries in mysql.db?

Posted: 14 Sep 2013 05:05 PM PDT

Recently, I posted an answer to a question about mysql.db.

Then, I got to thinking I should ask everyone this question:

I have noticed for years that upon installation of MySQL 5.0+, mysql.db is populated with two entries that allow test databases to be accessed by anonymous users.

You can see it by running this query:

mysql> select * from mysql.db where SUBSTR(db,1,4) = 'test'\G
*************************** 1. row ***************************
                 Host: %
                   Db: test
                 User:
          Select_priv: Y
          Insert_priv: Y
          Update_priv: Y
          Delete_priv: Y
          Create_priv: Y
            Drop_priv: Y
           Grant_priv: N
      References_priv: Y
           Index_priv: Y
           Alter_priv: Y
Create_tmp_table_priv: Y
     Lock_tables_priv: Y
     Create_view_priv: Y
       Show_view_priv: Y
  Create_routine_priv: Y
   Alter_routine_priv: N
         Execute_priv: N
*************************** 2. row ***************************
                 Host: %
                   Db: test\_%
                 User:
          Select_priv: Y
          Insert_priv: Y
          Update_priv: Y
          Delete_priv: Y
          Create_priv: Y
            Drop_priv: Y
           Grant_priv: N
      References_priv: Y
           Index_priv: Y
           Alter_priv: Y
Create_tmp_table_priv: Y
     Lock_tables_priv: Y
     Create_view_priv: Y
       Show_view_priv: Y
  Create_routine_priv: Y
   Alter_routine_priv: N
         Execute_priv: N
2 rows in set (0.00 sec)

Are these entries in the mysql.db a security risk, and if so, why are they added by default to a new install?

UPDATE 2013-06-14 10:13 EDT

This morning someone downvoted my question, which I truly don't understand. In light of this event, here is why I took the time to make a rebuttal:

I installed MySQL 5.6.12 for a client this week in their Staging Cluster. I decided to check to see if this was still an on-going problem:

mysql> select version();
+------------+
| version()  |
+------------+
| 5.6.12-log |
+------------+
1 row in set (0.00 sec)

mysql> select db,user,host from mysql.db where LEFT(db,4)='test';
+---------+------+------+
| db      | user | host |
+---------+------+------+
| test    |      | %    |
| test\_% |      | %    |
+---------+------+------+
2 rows in set (0.10 sec)

mysql> select now();
+---------------------+
| now()               |
+---------------------+
| 2013-06-14 10:10:13 |
+---------------------+
1 row in set (0.00 sec)

mysql>

Guess what? This is still a problem, even to this day!!!

MORAL OF THE STORY: Please check your mysql.db immediately after installation, remove anonymous logins, and erase these test entries from mysql.db without delay.
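
For anyone following that moral, a minimal cleanup might look like this (run as root; on most installs the bundled mysql_secure_installation script automates the same steps):

```sql
-- Remove the test-database grants and any anonymous users, then reload grants.
DELETE FROM mysql.db   WHERE SUBSTR(db,1,4) = 'test';
DELETE FROM mysql.user WHERE user = '';
DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;
```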
