Thursday, June 20, 2013

[how to] mongodb replication node stuck at “STARTUP2” with optimeDate as 1970

mongodb replication node stuck at “STARTUP2” with optimeDate as 1970

Posted: 20 Jun 2013 08:36 PM PDT

I have just set up a replica set with three nodes. The third node is stuck at stateStr STARTUP2 with "optimeDate" : ISODate("1970-01-01T00:00:00Z"), yet it shows no error message. Is this alright? On the primary, rs.status() yields:

{
    "set" : "qdit",
    "date" : ISODate("2013-06-18T22:49:41Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "q.example.com:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 2940,
            "optime" : {
                "t" : 1371593311,
                "i" : 1
            },
            "optimeDate" : ISODate("2013-06-18T22:08:31Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "q1.example.com:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 457,
            "optime" : {
                "t" : 1371593311,
                "i" : 1
            },
            "optimeDate" : ISODate("2013-06-18T22:08:31Z"),
            "lastHeartbeat" : ISODate("2013-06-18T22:49:40Z"),
            "lastHeartbeatRecv" : ISODate("2013-06-18T22:49:40Z"),
            "pingMs" : 0,
            "syncingTo" : "twitnot.es:27017"
        },
        {
            "_id" : 2,
            "name" : "q2.example.com:27017",
            "health" : 1,
            "state" : 5,
            "stateStr" : "STARTUP2",
            "uptime" : 300,
            "optime" : {
                "t" : 0,
                "i" : 0
            },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2013-06-18T22:49:40Z"),
            "lastHeartbeatRecv" : ISODate("2013-06-18T22:49:41Z"),
            "pingMs" : 7
        }
    ],
    "ok" : 1
}

Also, db.printSlaveReplicationInfo() yields:

source:   qdit1.queuedit.com:27017
     syncedTo: Tue Jun 18 2013 22:08:31 GMT+0000 (UTC)
          = 2894 secs ago (0.8hrs)
source:   qdit2.queuedit.com:27017
     syncedTo: Thu Jan 01 1970 00:00:00 GMT+0000 (UTC)
          = 1371596205 secs ago (380998.95hrs)

Is this alright? Also, how can I test my replication, especially the third node?
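For reference, a minimal way to watch the third member and then exercise replication once it reaches SECONDARY might look like this (mongo shell; the repltest collection is just a made-up test collection):

// On the third node: watch the initial sync progress
rs.status()                      // stateStr should move STARTUP2 -> RECOVERING -> SECONDARY
rs.printSlaveReplicationInfo()   // how far behind the primary each member is

// Once it reports SECONDARY, test end to end:
// on the PRIMARY
db.repltest.insert({ ts: new Date() })

// on the third node
rs.slaveOk()                                   // allow reads on a secondary
db.repltest.find().sort({ ts: -1 }).limit(1)   // the inserted document should appear here

A long stretch in STARTUP2 with an epoch optimeDate usually just means the initial sync (data copy and index builds) is still running, so the state transition itself is the main thing to watch.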

How to convert Postgres from 32 bit to 64 bit

Posted: 20 Jun 2013 09:06 PM PDT

I would like to convert from PG 32 bit to 64 bit. I am testing with pg_upgrade (in several scenarios) but I think it cannot do that.

With pg_upgrade, I can upgrade PG version 8.5 to v.9 (both of them run in 32 bit or 64 bit).

Until now, I haven't found how to change from a 32 bit install to 64 bit. How can this be done?
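For what it's worth, as far as I know pg_upgrade cannot cross a 32-bit/64-bit boundary (the on-disk format differs), so the usual route is a logical dump and restore; a rough sketch, with placeholder paths and user names:

# on the old 32-bit cluster
pg_dumpall -U postgres > full_dump.sql

# on the new 64-bit cluster (freshly initdb'ed, same or newer major version)
psql -U postgres -f full_dump.sql postgres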

storing csv files

Posted: 20 Jun 2013 06:51 PM PDT

Does anyone have any recommendations on how I'm handling the storage of CSV files? Anyone can upload any number of columns, with differing column names, and the amount of data doesn't matter. I am validating the data with an application-side CSV parser.

Right now, the data does need to be queried for searching.

I am storing the data in an EAV table, so the database has a column that holds the column name from the CSV and a column that holds the data for that row.

If the csv file has 10 columns and 10 rows, the database would have 100 rows, so it can get large fairly quickly.

Lastly, I am able to query the data efficiently because the application builds the query on the fly: it gathers the distinct column names and then uses MAX(IF(...))-style expressions to return the column names with their data, even when null values are present.
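For illustration, the pivot described above usually ends up looking something like this (the EAV table and column names here are hypothetical, not from the original setup):

-- hypothetical EAV table: csv_data(file_id, row_num, col_name, col_value)
SELECT row_num,
       MAX(CASE WHEN col_name = 'email' THEN col_value END) AS email,
       MAX(CASE WHEN col_name = 'name'  THEN col_value END) AS name
FROM   csv_data
WHERE  file_id = 42
GROUP BY row_num;

The application builds one MAX(CASE ...) branch per distinct column name it finds for that file.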

Update primary SQL Server in mirrored environment using transaction logs from old environment

Posted: 20 Jun 2013 08:00 PM PDT

I'm currently migrating a large (~40gb) database to a new environment. The new environment is a HA/mirrored set up with primary, mirror and witness nodes.

I'm backing up the database in the old environment, restoring it to the primary and mirror, and turning on mirroring. During this time the old environment is still active and data is changing. When we are ready to go live with the new environment, I plan to take one more transaction log backup from the old environment and restore it to the new primary server. Is that possible? Will it be successfully synchronised to the mirror?
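In outline, the final cut-over could look something like the sketch below (database name and paths are placeholders). Note that the new copies have to still be in the RESTORING state when that last log is applied, so mirroring would be established, or re-established, only after the final restore:

-- old environment, after stopping activity: tail-log backup
BACKUP LOG MyBigDb TO DISK = N'\\share\MyBigDb_tail.trn' WITH NORECOVERY;

-- new principal and new mirror, both still RESTORING:
RESTORE LOG MyBigDb FROM DISK = N'\\share\MyBigDb_tail.trn' WITH NORECOVERY;

-- bring the new principal online, then set up the mirroring partnership
RESTORE DATABASE MyBigDb WITH RECOVERY;   -- principal only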

Why do my WAL logs start with 000000010000000000000004 instead of 000000010000000000000001?

Posted: 20 Jun 2013 02:38 PM PDT

I could swear that when I first created my cluster a week ago, my logs started with 000000010000000000000001 or perhaps even 000000000000000000000001. Now, if I look in my pg_xlog directory, it begins with 000000010000000000000004. I have wal_keep_segments = 1024, and there are currently only 70 segments in the directory, so why would it have deleted the first few already? Or is there something I'm missing or doing wrong?

Note, I'm using Postgres 9.2.4 on Ubuntu 12.04.2 (via Pitt's ppa)

Running functions in parallel

Posted: 20 Jun 2013 04:42 PM PDT

This is for SQL Server 2012.

We have some import processes for FTP files that are picked up and read into a staging table; from there we massage/check the data before moving it into production. One of the areas causing some issues is dates: some are valid, some are typos, some are just plain gibberish.

I have the following example table(s):

Create Table RawData
(
    InsertID int not null,
    MangledDateTime1 varchar(10) null,
    MangledDateTime2 varchar(10) null,
    MangledDateTime3 varchar(10) null
)

I also have a destination table (say, in production):

Create Table FinalData
(
    PrimaryKeyID int not null, -- PK constraint here, ident
    ForeighKeyID int not null, -- points to InsertID of RawData
    ValidDateTime1 SmallDateTime null,
    ValidDateTime2 SmallDateTime null,
    ValidDateTime3 SmallDateTime null
)

I insert the following into the RawData table:

Insert Into RawData(InsertID, MangledDateTime1, MangledDateTime2, MangledDateTime3)
Values(1, '20001010', '20800630', '00000000') -- First is legit, second two are not

Insert Into RawData(InsertID, MangledDateTime1, MangledDateTime2, MangledDateTime3)
Values(1, '20800630', '20130630', '20000000') -- middle is legit, first/third are not

Insert Into RawData(InsertID, MangledDateTime1, MangledDateTime2, MangledDateTime3)
Values(1, '00001010', '00800630', '20130630') -- Last is legit, first two are not

I wrote a function dbo.CreateDate to address the issue(s). We try to clean the data as best we can (use NULL if we can't), then convert the data to the correct datatype (in this case smalldatetime).

Insert Into FinalData(ForeighKeyID, ValidDateTime1, ValidDateTime2, ValidDateTime3)
Select
    InsertID
   ,dbo.CreateDate(MangledDateTime1)
   ,dbo.CreateDate(MangledDateTime2)
   ,dbo.CreateDate(MangledDateTime3)
From RawData

We're running into some performance problems with functions. I'm wondering if/how they work in parallel.

I'm assuming here that the function CreateDate runs in parallel as each row inserts, such that each column/value gets its "own" function call running at the same time as the insert.

But I could be wrong: is it running serially over each column in each row as it inserts?

CreateDate() code:

Alter Function dbo.CreateDate
(
    @UnformattedString varchar(12)
)
Returns smalldatetime
As
Begin
    Declare @FormattedDate smalldatetime

    If(@UnformattedString Is Not Null)
    Begin
        Declare @MaxSmallDate varchar(8) = '20790606'

        -- We got gibberish
        If Len(@UnformattedString) = 1
        Begin
            return null
        End

        -- To account for date and time
        If Len(@UnformattedString) = 12
        Begin
            Select @UnformattedString = Substring(@UnformattedString, 0,9)
        End

        If @UnformattedString = '20000000'
        Begin
            Select @UnformattedSTring = @MaxSmallDate
        End

        -- Some people are sending us two digit years, won't parse right
        If Substring(@UnformattedString,0,3) = '00'
        Begin
            Select @UnformattedString = Replace(@UnformattedString, '00','20')
        End

        -- Some people are fat fingering in people born in 18??, so change to 19??
        If Substring(@UnformattedString,0,3) in ('18')
        Begin
            -- We only want to change the year '18', not day 18
            SELECT @UnformattedString = STUFF(@UnformattedString,
                                CHARINDEX('18', @UnformattedString), 2, '19')
        End

        -- We're getting gibberish
        If Substring(@UnformattedString,0,3) not in ('19','20')
                    And Len(@UnformattedString) != 6
        Begin
            Select @UnformattedString = Replace(@UnformattedString,
                            Substring(@UnformattedString,0,3),'20')
        End

        -- If the 4 digit year is greater than current year, set to max date
        If Convert(int, Substring(@UnformattedString,0,5)) > Year(getdate())
        Begin
            Set @FormattedDate = CONVERT(smalldatetime,@MaxSmallDate,1)
        End
        -- If the 4 digit year is less than 100 years ago, set to max date
        Else If Year(getdate()) - Convert(int, Substring(@UnformattedString,0,5)) >= 100
        Begin
            Set @FormattedDate = CONVERT(smalldatetime,@MaxSmallDate,1)
        End
        Else -- valid date(we hope)
        Begin
            Set @FormattedDate = CONVERT(smalldatetime,@UnformattedString,1)
        End

    End

    Return @FormattedDate
End
Go
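As an aside (not from the original post): in SQL Server 2012 a scalar UDF is evaluated once per row and per column it is applied to, and its presence generally prevents a parallel plan, so the work runs serially rather than in parallel. One way to gauge how much the UDF itself costs is to compare it against an inline, set-based expression; a rough sketch that only covers the plain yyyymmdd case (it returns NULL for out-of-range or gibberish values instead of mapping them to the max date, so it is not a drop-in replacement):

SELECT InsertID,
       TRY_CONVERT(smalldatetime, LEFT(MangledDateTime1, 8), 112) AS ValidDateTime1,
       TRY_CONVERT(smalldatetime, LEFT(MangledDateTime2, 8), 112) AS ValidDateTime2,
       TRY_CONVERT(smalldatetime, LEFT(MangledDateTime3, 8), 112) AS ValidDateTime3
FROM   RawData;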

Make sure slave has finished reading from master

Posted: 20 Jun 2013 04:19 PM PDT

I am planning to have a master-slave system in which occasionally the master and slave will reverse roles. To do this, I will stop further updates to master, and redirect traffic to the slave. But, before I make the slave the new master, I need to make sure the slave has finished reading all the updates from the old master. How can I do that? Is running "flush logs" on the master sufficient?

A solution that can be readily scripted would be preferred.
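One commonly scripted check is to capture the old master's final binary log coordinates and have the slave wait until it has applied them (the file name and position below are placeholders read from SHOW MASTER STATUS):

-- on the old master, after writes have stopped
FLUSH LOGS;
SHOW MASTER STATUS;     -- note File and Position, e.g. mysql-bin.000042 / 107

-- on the slave: block until that position has been applied, or give up after 60 seconds
SELECT MASTER_POS_WAIT('mysql-bin.000042', 107, 60);   -- >= 0 when caught up, -1 on timeout

-- sanity check: Seconds_Behind_Master should be 0
SHOW SLAVE STATUS\G

FLUSH LOGS on its own only rotates the logs; it does not tell you whether the slave has caught up.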

Create table group sub-folders in Management Studio

Posted: 20 Jun 2013 12:36 PM PDT

When I was exploring the master database in SSMS I noticed that under the Tables folder there is another folder called System Tables that houses a bunch of tables. Is it possible for us to create a structure akin to System Tables within our own database?

I am looking to organize tables and stored procedures into project-specific folders. Under the new setup, when referring to my table object I would have to use the following syntax (I am guessing here):

[dbName].[projectName].[dbo].[tableName]   

Also, apart from clearing up the clutter, does anybody foresee any performance improvement or degradation because of this reorganization? I use Microsoft SQL Server 2008 R2.
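For comparison, the closest built-in grouping mechanism is a schema per project rather than a folder, and it gives three-part names at most (the four-part form guessed above would be read as a linked-server reference). A sketch with made-up names:

CREATE SCHEMA ProjectAlpha AUTHORIZATION dbo;
GO
CREATE TABLE ProjectAlpha.Customer (CustomerID int NOT NULL PRIMARY KEY);
GO
-- referenced as database.schema.table:
SELECT CustomerID FROM MyDb.ProjectAlpha.Customer;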

SQL Import/Export defaults to byte stream on export

Posted: 20 Jun 2013 08:21 PM PDT

So, I've done some research and I can't seem to figure this out. I'm not sure if it's some setting that I'm missing or what.

For the basic info, running SQL2012, 64 bit, all that good stuff. I noticed that, for some reason, when I export from a table into a flat file using a query, the data type is defaulting to bytestream. In the past, it always defaulted to DT_STR and went through without any issues.

Here's the query:

SELECT pin from IncrementalDM_Emails
where email in (select email from [7755_forsql])
order by pin

Here's the error I'm getting:

Error dialog

Here's what the export is showing when I select "Edit Mappings..."

Mapping dialog

Now, this can be easily fixed by simply selecting "DT_STR" under the Mappings dialog but I frequently export using this method so I'd like to find out why it's doing it and fix it so I don't always have to go into the Edit Mappings dialog.

Is it something to do with the query and the use of the IN function? Any help would be greatly appreciated!

EDIT: The data in both tables is stored as varchar(50)
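One hedged workaround, assuming the wizard is deriving its metadata from the query, is to cast the column explicitly so the source is unambiguously varchar:

SELECT CAST(pin AS varchar(50)) AS pin
FROM   IncrementalDM_Emails
WHERE  email IN (SELECT email FROM [7755_forsql])
ORDER  BY pin;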

MySQL bin log missing data?

Posted: 20 Jun 2013 10:34 AM PDT

I'm trying to make heads or tails of my binary logs and am coming up short. I have many entries similar to the following from mysqlbinlog, but I'm missing log data that I know should be there.

# at 1050548
#130617 18:40:03 server id 1  end_log_pos 1050619   Query   thread_id=844988    exec_time=0 error_code=0
SET TIMESTAMP=1371519603/*!*/;
BEGIN
/*!*/;
# at 1050619
# at 1050782
#130617 18:40:03 server id 1  end_log_pos 1050809   Xid = 33121028
COMMIT/*!*/;

It's puzzling because I get the expected SQL in the mysqlbinlog output for statements executed in phpMyAdmin, but statements coming from other PHP-based remote web servers appear not to be recorded.

My binary logging settings are:

bin_log = /path_to_sql_incrementals
binlog_ignore_db = mysql

Am I missing a logging option?

Mysql 5.0.95 / CentOS 5.9

Azure compatible client tools

Posted: 20 Jun 2013 01:53 PM PDT

I'm building a DB for someone as a favour, and whilst I'm perfectly OK creating a DB in Azure SQL for them and doing the requisite T-SQL coding they require, I also need to give them a client-based way to access the data that involves no non-SQL coding from me.

Ideally this would be a lightweight alternative to SSMS that is compatible with Azure. That way I can give them a series of scripts and parametrised SPs to run. Can someone recommend something that works, please?

Thanks for reading

MYSQL matching one column to another in a different table

Posted: 20 Jun 2013 11:08 AM PDT

I currently have two different tables.

The first table has a list of titles and IDs associated with those titles; the second table is a list of random headings.

What I would like to know is whether there is a way to match up all the titles in table2 to the closest matching title in table1. Is this possible?

I've tried:

SELECT title_table.*, random_titles.*,
       MATCH (title_table.title) AGAINST (random_titles.title) AS relevance
FROM title_table
ORDER BY relevance DESC

But that did not work.

I know I could run this kind of query as each title is inserted into table2 with PHP, but I already have a lot of titles in the database.

Any help would be amazing, thanks.

SQL Server grant permissions to STANDBY database

Posted: 20 Jun 2013 12:25 PM PDT

So, first: the setup. We have SQL Server 2012 (ServerA) running in domain A. We have SQL Server 2012 (ServerB) running in domain B, separate subnet, no trusts. These are completely separate domains for security reasons and they cannot be joined/trusted/etc. We need to be able to query the data directly from domain B via Windows Authenticated logins. I was able to use this guide to set up transaction log shipping to get the databases from ServerA to ServerB (summary: create the transaction log shipping config, use WinSCP to copy the logs to the remote server, manually create the secondary using SQL script). So now we have the two databases running in STANDBY/read-only on ServerB.

Now, the problem: we cannot grant access to these databases because they are read-only, so we cannot modify the permissions. How can I grant read-only access to these databases (either at the server level or the DB level) to a domain group from DomainB on ServerB? I've found several references to creating a SQL login on the sending side, but I can't find any way to replicate that with a Windows Authentication login.

Transaction Log Maintenance While Using an AlwaysOn Availability Group

Posted: 20 Jun 2013 10:45 AM PDT

We are using the HADR (AlwaysOn Availability Group) feature of SQL Server 2012. The server and AG configuration is as below:

  1. SQL12_SRV1 --> AG1(PRIMARY) - SYNC -->> DBTest

  2. SQL12_SRV2 --> AG1(SECONDARY) -->> DBTest - ASYNC

The DBTest database is growing by approximately 200 GB per month, and the transaction log file grows along with the data.

So how do we keep the transaction log file size under control by taking log backups the proper way, and on which replica should we take those log backups?
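For illustration (the path is a placeholder): in a 2012 availability group, ordinary log backups can be taken on either the primary or a secondary replica, and it is those regular log backups, not the AG itself, that allow the log to be truncated and stop it growing:

-- run on whichever replica your backup preference points at
BACKUP LOG DBTest
TO DISK = N'\\backupshare\DBTest_log.trn'
WITH COMPRESSION;

-- returns 1 on the replica the AG's automated backup preference currently favours
SELECT sys.fn_hadr_backup_is_preferred_replica(N'DBTest');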

Thanks In Advance.

trouble creating a database with PostgreSQL

Posted: 20 Jun 2013 04:36 PM PDT

I just installed PostgreSQL on my Windows laptop. I am now trying to create my first database. I launched Windows PowerShell and tried the following:

PS C:\Users\Chris\Documents\test> & 'C:\Program Files\PostgreSQL\9.2\bin\createdb.exe' mydb
Password:
Password:
createdb: could not connect to database template1: FATAL:  password authentication failed for user "Chris"

From what I can gather, the password that you give PostgreSQL during installation is for a different user type? After some poking around on the internet, I modified my pg_hba.conf file by appending the following:

local   all all trust  

Now I get the following error message:

PS C:\Users\Chris\Documents\test> & 'C:\Program Files\PostgreSQL\9.2\bin\createdb.exe' mydb
createdb: could not connect to database template1: FATAL:  could not load pg_hba.conf

Not sure what I am doing wrong. Any suggestions?


Well, this is interesting. I went back to the pg_hba.conf file and added a newline after the line I added earlier. I got a new error message:

PS C:\Users\Chris\Documents\test> & 'C:\Program Files\PostgreSQL\9.2\bin\createdb.exe' mydb
createdb: could not connect to database template1: could not connect to server: Connection refused (0x0000274D/10061)
    Is the server running on host "localhost" (::1) and accepting
    TCP/IP connections on port 5432?
could not connect to server: Connection refused (0x0000274D/10061)
    Is the server running on host "localhost" (127.0.0.1) and accepting
    TCP/IP connections on port 5432?

When I installed PostgreSQL on my laptop, I set the port to 5432. It looks like PostgreSQL is expecting the server to be running. Going to look into this...
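For reference, and assuming the 9.2 Windows build: PostgreSQL on Windows has no Unix-domain sockets, so a 'local' line in pg_hba.conf is rejected there, which would explain the 'could not load pg_hba.conf' error. Entries of roughly this shape are what the Windows installer normally writes; 'trust' instead of 'md5' would skip the password prompt for testing:

# TYPE   DATABASE   USER   ADDRESS         METHOD
host     all        all    127.0.0.1/32    md5
host     all        all    ::1/128         md5

After editing the file, the service also has to be running (and reloaded) for createdb to connect at all, which matches the 'Connection refused' message at the end.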

Is this a good strategy for importing a large amount of data and decomposing as an ETL?

Posted: 20 Jun 2013 02:15 PM PDT

I have a set of five tables (a highly decomposed schema for an ETL, if I understand the nomenclature) that I'm going to load via bulk import, then run some inserts from those five tables into a slew of other tables, including inserts that rely only on the values I just inserted into the first tables.

I can do my inserts as an A, B, C process: insert into the first table, then insert into some table S where the row exists in A + T (T being some other table that has preloaded "configuration data"), then insert into Z where the row exists in B + U, etc.

Should I be trying to batch those inserts with a cursor (I know, stone the traitor), or should I just run the raw INSERT INTO scripts and let the server die a thousand heat deaths? Basically I'm worried about starving the server or causing it to collapse from too many indexes or something else related to the inserts.

Should I stage out the inserts as:

  1. Insert into one set of tables
  2. Insert into secondary tables based on the first tables
  3. Insert into tertiary tables, etc.

OR should I insert into all the tables where the data is needed, but do it via cursor, in a "for loop" style pattern of 100k rows at a time? (A sketch of that batching pattern follows the list of considerations below.)

FWIW, this is a behavior I saw from the DBAs at my last job, so I figure that's "what I'm supposed to do" (the batch process via cursors) but maybe I don't understand enough about what they were doing (they were also live-loading into systems that already had data, and were loading new data afterwards).

Also bear in mind that I'm normally a C# dev, but I've got the most TSQL experience here and I'm trying to make the best process I can for raw-loading this data as opposed to our "current" method that is mostly webservice fetches and NHibernate save-commits.

Things I think are important to the question:

  1. There will be no other load on this server when I do this, I will have complete and total control of the box, and I'm the only user interested here (this is the initial data load before anyone else can do anything with the box)
  2. There are indexes on all the tables, where appropriate, and there are FKs and other predicates
  3. The entire db system is in use for slews of other clients right now, so I know the indexes are needed for operation, but if I should do something to disable those and re-enable them to make the inserts etc faster, I'm looking for those sorts of optimization hacks.
  4. I'm waiting on sample data but I expect the maximum records in a given table to be about a million rows, fairly wide, and that I'm going to insert into those other tables at most a million rows for the ancillary tables that can come secondary or tertiary.
  5. The hardware is "average".
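As a concrete shape for the batching option, a keyset loop over the staging table avoids an actual cursor; everything below (table, key and column names, the 100k batch size) is made up for illustration:

DECLARE @BatchSize int = 100000, @LastID int = 0, @MaxID int;
SELECT @MaxID = MAX(StagingID) FROM StagingA;            -- hypothetical staging table

WHILE @LastID < @MaxID
BEGIN
    INSERT INTO TargetS (ColA, ColB)
    SELECT a.ColA, t.ColB
    FROM   StagingA a
    JOIN   ConfigT  t ON t.ConfigKey = a.ConfigKey       -- the preloaded "configuration data"
    WHERE  a.StagingID >  @LastID
      AND  a.StagingID <= @LastID + @BatchSize;

    SET @LastID = @LastID + @BatchSize;
END

Each iteration commits on its own under autocommit, which keeps the log and lock footprint per batch bounded.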

Dropping Hypothetical Indexes

Posted: 20 Jun 2013 10:33 AM PDT

In the past I thought I'd deleted hypothetical indexes using either a DROP INDEX statement for clustered indexes or a DROP STATISTICS statement for non-clustered indexes.

I have a database that is full of DTA remnants that I would like to cleanup; however, when I try to drop the object I always receive an error telling me that I cannot drop the object "because it does not exist or you do not have permission". I am a full sysadmin on the server so would expect to have rights to do anything.

I've tried this with both DROP STATS and DROP INDEX statements but both give me the same error.

Has anyone deleted these before and is there a trick I'm missing?


Addendum

Poking around at this, I just noticed that if I right-click on the object, both the 'Script As' and 'DELETE' options are greyed out.
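For reference, the DTA leftovers can be listed from the catalog before attempting the drop; the index name in the commented DROP is just a placeholder for whatever _dta_index_* / _dta_stat_* names turn up:

-- list hypothetical ("what-if") indexes left behind by DTA
SELECT OBJECT_NAME(i.object_id) AS table_name, i.name, i.index_id
FROM   sys.indexes AS i
WHERE  i.is_hypothetical = 1;

-- they are dropped like normal indexes, e.g.:
-- DROP INDEX [_dta_index_MyTable_c_1] ON dbo.MyTable;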

SA permissions issues with many nested objects

Posted: 20 Jun 2013 04:42 PM PDT

I have a broker application that's relatively complicated.

Today, after I made some changes, I started getting the error:

The server principal 'sa' is not able to access the database 'XYZ' under the current security context.

The whole scenario up to the point of the error is:

(In Database ABC)

  • A broker message gets submitted into a queue
  • The queue reader picks it up
  • The queue reader updates a table to indicate the work is starting
  • This table has an update trigger on it. The trigger
    • Checks a table in database XYZ to make sure the inserted value for this field is valid
    • The table is accessed via a synonym

The check in the trigger is, I believe, what is causing the issue.

If I run the update manually, it works fine. I have also used EXECUTE AS to run the update manually as sa which works fine.

Other relevant facts:

  • sa is the owner of both database ABC and database XYZ
  • there's no funny business with the sa account - it's db_owner role in both DBs as well

Is there some sort of strange scoping happening because all this is running in the context of broker?

Updates

Some more info:

  • DB ownership chaining is on at the server level but not in the DBs. I turned it on and it made no difference.
  • Using a three part name instead of a synonym didn't make a difference
  • Profiler trace shows that the statements are running as SA and not another login
  • Setting both databases TRUSTWORTHY to ON didn't make a difference
  • If I run the queue activation proc manually, it processes correctly (under my credentials).

General tool to load dump files

Posted: 20 Jun 2013 03:33 PM PDT

I am a big fan of Postgres, both for its price and for its features. I am going to need to load both Oracle dumps and SQL Server files into it. I will try to ask and beg for plain .csv files and schema DDL, but I suspect that I will be given dmp files.

Is there a tool, preferably an open source one, that would allow me to read, profile and possibly load Oracle/SQL Server dump files into Postgres?

Thank you, Edmon

Optimizing bulk update performance in Postgresql

Posted: 20 Jun 2013 02:55 PM PDT

Using PG 9.1 on Ubuntu 12.04.

It currently takes up to 24h for us to run a large set of UPDATE statements on a database, which are of the form:

UPDATE table
SET field1 = constant1, field2 = constant2, ...
WHERE id = constid

(We're just overwriting fields of objects identified by ID.) The values come from an external data source (not already in the DB in a table).

The tables have handfuls of indices each and no foreign key constraints. No COMMIT is made till the end.

It takes 2h to import a pg_dump of the entire DB. This seems like a baseline we should reasonably target.

Short of producing a custom program that somehow reconstructs a dataset for Postgresql to re-import, is there anything we can do to bring the bulk UPDATE performance closer to that of the import? (This is an area that we believe log-structured merge trees handle well, but we're wondering if there's anything we can do within Postgresql.)

Some ideas:

Basically there's a bunch of things to try and we're not sure what the most effective are or if we're overlooking other things. We'll be spending the next few days experimenting, but we thought we'd ask here as well.

I do have concurrent load on the table but it's read-only.
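One pattern often suggested for this shape of workload is to load the incoming values into a staging table with COPY and then apply them in a single set-based UPDATE; a sketch, with placeholder table, column and path names, assuming the external data can be expressed as CSV:

-- 1. stage the new values (use \copy from psql if the file lives on the client)
CREATE TEMP TABLE new_values (id integer PRIMARY KEY, field1 text, field2 text);
COPY new_values FROM '/path/to/data.csv' WITH (FORMAT csv);

-- 2. apply them in one statement instead of millions of single-row UPDATEs
UPDATE big_table AS t
SET    field1 = n.field1,
       field2 = n.field2
FROM   new_values AS n
WHERE  t.id = n.id;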

Thanks.

High MySQL load, over 700% CPU

Posted: 20 Jun 2013 02:33 PM PDT

I have high MySQL load on a 64-bit Linux server with 24 GB RAM and an Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz.

A lot of queries are sitting in the "Sending data" state.

Here is the MySQL process list:

+------+--------------+-----------+--------------+---------+------+----------------+---------------------------------------------------------------------------------------------+  | Id   | User         | Host      | db           | Command | Time | State          | Info                                                                                        |  +------+--------------+-----------+--------------+---------+------+----------------+---------------------------------------------------------------------------------------------+  | 1    | leechprotect | localhost | leechprotect | Sleep   | 507  |                |                                                                                             |  | 422  | face_book  | localhost | face_book  | Query   | 0    | Sending data   | SELECT * FROM likes WHERE userid='100002047302002' and pageid='113623891994626' Limit 1     |  | 440  | face_book  | localhost | face_book  | Query   | 0    | Sending data   | SELECT * FROM likes WHERE userid='100003610486105' and pageid='137067399778568' Limit 1     |  | 745  | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100000156154972' and pageid='259472294088694' Limit 1     |  | 813  | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100001079730269' and pageid='111612552199698' Limit 1     |  | 817  | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100003718366503' and pageid='105790599509795' Limit 1     |  | 888  | face_book  | localhost | face_book  | Query   | 0    | Sending data   | SELECT * FROM friends WHERE userid='100004673917946' and friendid='100004836366957' Limit 1 |  | 930  | face_book  | localhost | face_book  | Query   | 4    | Sending data   | SELECT * FROM likes WHERE userid='100001857826693' and pageid='379878825440539' Limit 1     |  | 940  | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100002525443288' and pageid='432454306781258' Limit 1     |  | 976  | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100001786746020' and pageid='266169045612' Limit 1        |  | 980  | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100000721604707' and pageid='188587591283392' Limit 1     |  | 999  | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100001661124181' and pageid='161323847303028' Limit 1     |  | 1033 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100002273583367' and pageid='447287665321823' Limit 1     |  | 1064 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100003902289965' and pageid='315361025251697' Limit 1     |  | 1100 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100002720670629' and pageid='114370700612' Limit 1        |  | 1109 | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100002279885489' and friendid='100002385585461' Limit 1 |  | 1111 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='100001087100886' and friendid='100005456647732' Limit 1 |  | 1132 | face_book 
 | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100005404566097' and pageid='225594034251253' Limit 1     |  | 1148 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100005551654543' and pageid='104088939622341' Limit 1     |  | 1172 | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100000009185323' and pageid='110343285691930' Limit 1     |  | 1188 | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100003468150624' and pageid='182937471830173' Limit 1     |  | 1192 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100000619411698' and pageid='527695070578211' Limit 1     |  | 1196 | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100002866966279' and pageid='244651575605946' Limit 1     |  | 1208 | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100001057034709' and friendid='1080136538' Limit 1      |  | 1230 | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100005246283878' and friendid='100002513789129' Limit 1 |  | 1240 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='100005028574119' and friendid='100001229276848' Limit 1 |  | 1241 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='1681467791' and friendid='1537753959' Limit 1           |  | 1242 | face_book  | localhost | face_book  | Query   | 0    | Sending data   | SELECT * FROM friends WHERE userid='100001845705855' and friendid='1668437534' Limit 1      |  | 1247 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='100003854517927' and friendid='100002322873932' Limit 1 |  | 1257 | face_book  | localhost | face_book  | Query   | 0    | Sorting result | SELECT lastpost,id FROM facesessions ORDER BY lastpost DESC                                 |  | 1276 | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100001579975664' and pageid='402312375509' Limit 1        |  | 1284 | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100001827038305' and pageid='254365179238' Limit 1        |  | 1291 | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='1587203387' and pageid='197678701083' Limit 1             |  | 1309 | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100002156769339' and friendid='100001641695726' Limit 1 |  | 1318 | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100002230633105' and pageid='343669279056732' Limit 1     |  | 1325 | face_book  | localhost | face_book  | Sleep   | 8    |                |                                                                                             |  | 1333 | face_book  | localhost | face_book  | Sleep   | 2    |                |                                                                                             |  | 1338 | face_book  | localhost | face_book  | Sleep  
 | 0    |                |                                                                                             |  | 1339 | root         | localhost |              | Query   | 0    |                | show processlist                                                                            |  +------+--------------+-----------+--------------+---------+------+----------------+---------------------------------------------------------------------------------------------+  Uptime: 508  Threads: 38  Questions: 65938  Slow queries: 0  Opens: 51156  Flush tables: 1  Open tables: 34  Queries per second avg: 129.799  

and here is /etc/my.cnf

[client]
socket=/var/lib/mysql/mysql.sock

[mysqld]
tmpdir=/mysqltmp

port = 3306
socket = /var/lib/mysql/mysql.sock
skip-networking
skip-external-locking
local-infile=0

back_log = 100
#skip-innodb
max_connections = 50
table_cache = 32
key_buffer_size = 12284M
myisam_sort_buffer_size = 512M
myisam_max_sort_file_size = 8192M
join_buffer_size = 512M
read_buffer_size = 512M
sort_buffer_size = 512M
read_rnd_buffer_size = 512M
table_definition_cache = 8000
table_open_cache = 8000
thread_cache_size = 4
wait_timeout = 360
interactive_timeout = 60
connect_timeout = 10
tmp_table_size = 256M
max_heap_table_size = 256M
max_allowed_packet = 512M
max_seeks_for_key = 1000
group_concat_max_len = 1024
max_length_for_sort_data = 1024
net_buffer_length = 16384
max_connect_errors = 100000
concurrent_insert = 2
bulk_insert_buffer_size = 8M
query_cache_limit = 1M
query_cache_size = 256M
query_cache_type = 1
query_prealloc_size = 262144
query_alloc_block_size = 65536
range_alloc_block_size = 4096
transaction_alloc_block_size = 8192
transaction_prealloc_size = 4096
default-storage-engine = MyISAM
max_write_lock_count = 8

innodb_open_files = 500
innodb_data_file_path= ibdata1:10M:autoextend
innodb_buffer_pool_size = 64M
innodb_additional_mem_pool_size = 32M

innodb_log_files_in_group = 2
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency = 8

#slow_query_log=0
#long_query_time=1
#slow_query_log_file=/var/log/mysql/log-slow-queries.log

open_files_limit=50000

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
nice = -5
open-files-limit = 8192

[mysqldump]
quick
max_allowed_packet = 512M

[myisamchk]
key_buffer_size = 6400M
sort_buffer_size = 1M
read_buffer_size = 1M
write_buffer_size = 1M

[mysqlhotcopy]
interactive-timeout

I tried optimizing the tables and adjusting my.cnf with mysqlreport, but it's still the same.

I don't use InnoDB.

MySQL version:

# mysql -V
mysql  Ver 14.14 Distrib 5.1.68, for unknown-linux-gnu (x86_64) using readline 5.1

mysql> SHOW CREATE TABLE friends\G

*************************** 1. row ***************************
       Table: friends
Create Table: CREATE TABLE `friends` (
  `id` int(100) unsigned NOT NULL AUTO_INCREMENT,
  `userid` mediumtext COLLATE latin1_general_ci,
  `friendid` mediumtext COLLATE latin1_general_ci,
  `name` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `dateline` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `lastsend` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `sendstatus` varchar(255) COLLATE latin1_general_ci NOT NULL,
  PRIMARY KEY (`id`),
  KEY `name` (`name`),
  KEY `lastsend` (`lastsend`),
  KEY `sendstatus` (`sendstatus`)
) ENGINE=MyISAM AUTO_INCREMENT=1079024 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
1 row in set (0.00 sec)

mysql> SHOW CREATE TABLE likes\G

*************************** 1. row ***************************
       Table: likes
Create Table: CREATE TABLE `likes` (
  `id` int(100) unsigned NOT NULL AUTO_INCREMENT,
  `userid` mediumtext COLLATE latin1_general_ci,
  `pageid` mediumtext COLLATE latin1_general_ci,
  `name` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `link` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `dateline` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `lastsend` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `sendstatus` varchar(255) COLLATE latin1_general_ci NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=2008744 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
1 row in set (0.00 sec)

mysql> SHOW CREATE TABLE facesessions\G

*************************** 1. row ***************************
       Table: facesessions
Create Table: CREATE TABLE `facesessions` (
  `id` int(100) unsigned NOT NULL AUTO_INCREMENT,
  `session_key` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `uid` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `expires` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `secret` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `access_token` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `sig` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `username` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `lastposttime` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `lastpost` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `nextsend` varchar(50) COLLATE latin1_general_ci DEFAULT NULL,
  `lastpoststatus` varchar(50) COLLATE latin1_general_ci DEFAULT NULL,
  `gender` varchar(20) COLLATE latin1_general_ci DEFAULT NULL,
  `birthday` varchar(20) COLLATE latin1_general_ci DEFAULT NULL,
  `location` varchar(50) COLLATE latin1_general_ci DEFAULT NULL,
  `imported` int(10) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `uid` (`uid`),
  KEY `access_token` (`access_token`),
  KEY `sig` (`sig`),
  KEY `username` (`username`),
  KEY `lastposttime` (`lastposttime`),
  KEY `lastpost` (`lastpost`),
  KEY `nextsend` (`nextsend`),
  KEY `lastpoststatus` (`lastpoststatus`)
) ENGINE=MyISAM AUTO_INCREMENT=16238 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
1 row in set (0.00 sec)

mysql> SELECT SUM(index_length) FROM information_schema.tables WHERE engine='MyISAM';

+-------------------+
| SUM(index_length) |
+-------------------+
|         150684672 |
+-------------------+
1 row in set (0.01 sec)
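Not part of the original post, but worth noting against the process list above: the hot SELECTs filter on userid/pageid and userid/friendid, and neither column is indexed (both are mediumtext, so any index needs a prefix length); every such query is therefore a full table scan. A sketch of the kind of composite prefix indexes that would address that (the prefix length of 20 is a guess sized to fit the numeric IDs shown):

ALTER TABLE likes   ADD INDEX idx_user_page   (userid(20), pageid(20));
ALTER TABLE friends ADD INDEX idx_user_friend (userid(20), friendid(20));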

How do I execute an Oracle SQL script without sqlplus hanging on me?

Posted: 20 Jun 2013 01:33 PM PDT

For an automated task I would very much like to run some SQL scripts and make sure that sqlplus does not hang under any circumstances, i.e.:

  • If the script contains any named substitution variable that has to be entered by the user, sqlplus should return with an error instead of prompting for the variable -- I cannot use set define off, as some of these scripts need to accept command line parameters that have to be resolved with &1
  • The script must not "hang" when it doesn't contain an exit; at the end.

    Solved: I think now that I can achieve this by wrapping the sql-script in a secondary "caller script". I.e., the caller script calls the other script with @ and then has a fixed exit; after that. The other script doesn't need an exit that way.

  • Anything else: If it would require a prompt, it should return with an error.

How can I do this with Oracle (and sqlplus, or something else)?
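A sketch of the caller-script idea from the "Solved" note above, combined with the usual fail-instead-of-retry switches (file and parameter names are made up):

-- caller.sql, run as:  sqlplus -S -L scott/tiger@db @caller.sql
WHENEVER SQLERROR EXIT FAILURE    -- abort the session on the first SQL error
WHENEVER OSERROR  EXIT FAILURE
@real_script.sql some_parameter   -- &1 inside real_script.sql resolves to 'some_parameter'
EXIT                              -- guarantees sqlplus never sits at a prompt afterwards

The -L flag makes sqlplus give up after one failed logon instead of re-prompting, and -S suppresses the banners; an undefined substitution variable inside the inner script will still prompt, though, which is the part with no clean switch.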

Why is the second execution of a query with different constants faster?

Posted: 20 Jun 2013 11:33 AM PDT

Can someone explain, or point me to documentation on, how index execution happens with different constants at intervals in MySQL? I notice that only the first execution against the table takes a long time; after that, with different constants, the query executes very quickly. I would like to know how to run the query so that it takes the same amount of time every time it executes with different constants. Is there a parameter I can turn off or on?

Query execution time: 9 minutes.

mysql> EXPLAIN SELECT chargetype, COUNT(br.`id`), SUM(br.`charge`)
       FROM billingreport AS br
       WHERE br.`addeddate` BETWEEN '2013-02-01 00:00:00' AND '2013-02-28 23:59:59'
       AND br.`status` = 'success'
       AND br.`idvendor` = 10
       GROUP BY chargetype \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: br
         type: index_merge
possible_keys: NewIndex3,NewIndex6,idx_br_status
          key: NewIndex3,idx_br_status
      key_len: 4,1
          ref: NULL
         rows: 2887152
        Extra: Using intersect(NewIndex3,idx_br_status); Using where; Using temporary; Using filesort
1 row in set (0.00 sec)

Query execution time: 18 seconds.

mysql> EXPLAIN SELECT chargetype, COUNT(br.`id`), SUM(br.`charge`)
       FROM billingreport AS br
       WHERE br.`addeddate` BETWEEN '2013-01-01 00:00:00' AND '2013-01-31 23:59:59'
       AND br.`status` = 'success'
       AND br.`idvendor` = 10
       GROUP BY chargetype \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: br
         type: index_merge
possible_keys: NewIndex3,NewIndex6,idx_br_status
          key: NewIndex3,idx_br_status
      key_len: 4,1
          ref: NULL
         rows: 3004089
        Extra: Using intersect(NewIndex3,idx_br_status); Using where; Using temporary; Using filesort
1 row in set (0.01 sec)

SSRS appears to be ignoring Permissions set using Report Manager

Posted: 20 Jun 2013 05:33 PM PDT

I have setup SSRS on SQL Server 2008 in native mode.

As an administrator I can login to report manager, upload reports and run them, and also use the Web Service URL to generate reports.

I have also created a local user on the machine. I went into Report Manager as an admin and, at the top level, set permissions that should assign the local user to all roles.

When I log in to the machine as that user and navigate to Report Manager, I just get the heading for the page but do not see any of the folders that are configured.

I've checked and the folders are set to inherit parent permissions and they are showing the newly created local user in there too.

It seems odd that I have set the permissions, yet SSRS is still not showing what I should be able to see. Is there another step I need to take other than configuring the permissions in Report Manager?

When logged in as the newly created local user:

Report Manager - shows the heading for the page, but no folders/items
Web Service URL (http://machine/ReportServer) - rsAccessDeniedError

postgis problem with shortest distance calculation

Posted: 20 Jun 2013 12:33 PM PDT

While working with PostGIS and pgRouting to calculate the distance between two roads (lines), I found the shortest_path function.

But its logic is based on a start point (start_id) and an end point (end_id), whereas in my data a linestring contains many internal points, like 'linestring(1 1,2 2,3 3,4 4,5 5)' (just for example).

It takes (1 1) as the start point and (5 5) as the end point.

If another line starts with (5 5), it shows up as a route, e.g. 'linestring(5 5,6 6)'.

But a line that crosses a point inside the linestring, like (2 2,3 3,4 4), is not reported as connected. Example:

table roads:
id  name  way
1   A     linestring(1 1,2 2,3 3,4 4,5 5)
2   B     linestring(5 5,6 6)
3   c     linestring(2 1,2 2,2 3)

If I apply the shortest_path function from point (1 1) to (6 6), it shows the way, but from (1 1) to (2 3) it shows nothing, although there is a route (1 1,2 2,2 3).

Can anyone please help me find a solution?

Regards Deepak M

How do I find my current SCN?

Posted: 20 Jun 2013 01:07 PM PDT

Given any version of Oracle:

  • How do I find my current SCN?
  • What is the maximum possible SCN?
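For the first part, two standard ways to read the current SCN on 10g and later (the DBMS_FLASHBACK call needs the corresponding execute privilege):

SELECT current_scn FROM v$database;

SELECT dbms_flashback.get_system_change_number FROM dual;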

Meaning of 'SET' in error message 'Null value is eliminated by an aggregate or other SET operation'

Posted: 20 Jun 2013 10:55 AM PDT

I saw the above 'ANSI warning' message today when running a colleague's script (and I don't know which of the many statements caused the warning to be shown).

In the past I've ignored it: I avoid nulls myself and so anything that would eliminate them is a good thing in my book! However, today the word 'SET' literally shouted out at me and I realised I don't know what the meaning of the word is supposed to be in this context.

My first thought, based on the fact it is upper case, is that it is referring to the SET keyword and means 'assignment', as in

UPDATE <table> SET ...

...ON DELETE SET NULL...

SET IDENTITY_INSERT <table> ON

According to the SQL Server Help, the 'ANSI warnings' feature is based on ISO/ANSI SQL-92, the spec for which makes just one use of the term 'Set operation' in a subsection title, hence in title case, in the data assignment section. However, after a quick Googling of the error message I see examples that are SELECT queries with seemingly no assignment involved.
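For what it's worth, the warning is easy to reproduce with a plain SELECT and no assignment in sight, e.g.:

SELECT SUM(x)
FROM (VALUES (1), (NULL)) AS t(x);
-- Warning: Null value is eliminated by an aggregate or other SET operation.

so whatever 'SET' means here, it is not limited to the SET keyword.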

My second thought, based on the wording of the SQL Server warning, was that the mathematical meaning of set is implied. However, I don't think that aggregation in SQL is strictly speaking a set operation. Even if the SQL Server team consider it to be a set operation, what is the purpose of putting the word 'set' in capitals?

While Googling I noticed a SQL Server error message:

Table 'T' does not have the identity property. Cannot perform SET operation.  

The same words 'SET operation' in the same case here can only refer to the assignment of the IDENTITY_INSERT property, which brings me back to my first thought.

Can anyone shed any light on the matter?

Can you give me one example of Business Intelligence?

Posted: 20 Jun 2013 06:11 PM PDT

I don't really understand what Business Intelligence is all about. If I start from having a corporate DB, what is it that a BI person would do? I found plenty of material on the web, but it is usually a bit too complex. I want a simple example that would make me understand what BI is all about and what a BI person would produce that is of value to my organization.

