Thursday, August 8, 2013

[how to] Complex constraint across all data in a table in Postgres

Complex constraint across all data in a table in Postgres

Posted: 08 Aug 2013 07:53 PM PDT

We have a table to record processing that occurs on a system and we need to ensure that only a single row has an 'in process' status.

I want to ensure that the result of this is always either zero or one:

select count(id) from jobs where status in ('P', 'G');  

We're using explicit transactions so ideally this check would happen on commit and abort the transaction if the invariant doesn't hold. Handling any racy processing that throws the occasional error is much easier for us to deal with than suddenly ending up with more than one job that is 'in process'.

The solution only needs to work with Postgres so we're quite happy to take a non-standard solution to this. We're currently on 8.4, but we'll be upgrading at some point to 9.x if that makes any difference.
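For what it's worth, one Postgres-only sketch that needs nothing newer than 8.4: a unique partial index whose indexed expression is constant for every row the predicate matches. It enforces the invariant at write time rather than commit time, and the losing writer simply gets an error, which seems to fit the tolerance described above (the index name is illustrative):

CREATE UNIQUE INDEX jobs_one_in_process
    ON jobs ((status IS NOT NULL))
    WHERE status IN ('P', 'G');

Since status IS NOT NULL evaluates to true for every 'P' or 'G' row, a second row in those states violates uniqueness and the offending statement aborts.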

Create Language plperl - Error: could not load library plperl.dll

Posted: 08 Aug 2013 07:35 PM PDT

When I run CREATE LANGUAGE plperl, I get: ERROR: could not load library "C:/Program Files/PostgreSQL/9.1/lib/plperl.dll": The specified module could not be found.

But on my computer the plperl.dll file does exist in the "C:/Program Files/PostgreSQL/9.1/lib/..." folder. (I cannot post an illustrative image; this forum requires >= 10 reputation.)

And if I SELECT * FROM pg_pltemplate, I get:

-[ RECORD 4 ]-+-------------------------
tmplname      | plperl
tmpltrusted   | t
tmpldbacreate | t
tmplhandler   | plperl_call_handler
tmplinline    | plperl_inline_handler
tmplvalidator | plperl_validator
tmpllibrary   | $libdir/plperl

PostgreSQL 8.0 - Windows Server 2008 R2 install - Service won't start

Posted: 08 Aug 2013 05:43 PM PDT

I am trying to install PostgreSQL 8.0 on a Windows Server 2008 R2 machine. After the installation, the service does not start. When I try to start it manually, I get a message saying that the service started and then stopped automatically. I tried running the service under a postgres account and as a Local System account (in the service properties).

I don't know if this is relevant, but during the installation, I got a few alert messages about "procedural languages" and "contrib files" being installed but not activated on the database.

Every log file in pg_log is empty.

Does anyone know what's going on here?

Any suggestions would be much appreciated.

How to link a customer to a country when the city/state isn't known

Posted: 08 Aug 2013 03:44 PM PDT

We have a situation where medical systems are sold to hospital groups, but at the time of the order, we don't always know into which state and city the system will be installed. We always know the country, as hospital groups never span countries.

We want to be able to save as much information as we know, so if we know the city, state and country, we want to save all three. If we only know state and country, we want to save those two. If we only know the country, we want to save just that.

Now, I can see three ways of doing this, and was hoping someone would be able to comment as to which is preferable...

1) Have three tables, Countries, States and Cities, each of which is linked from the previous, so any city will have a StateID, and any state will have a CountryID. That way, if we know the city, we can work up the links and find out the state and country. This is neat, and preserves data integrity, but has the disadvantage that if we don't know the city, we can't save it.

One way we thought of to get around this was for each country to contain a placeholder state, that wouldn't be shown in the UI, and for each state to hold one placeholder city. That way, if we only know the country, the placeholder state for that country would be saved, as would the placeholder city for the state. This neatly gets around the problem, but requires placeholder states and cities to be inserted. This isn't a performance issue, as there wouldn't be enough of them to be a problem, but feels a bit like a hack.

2) Another approach is to have a single self-referencing Locations table that contains a LocationTypeID identifying whether the location is a country, state or city, as well as a ParentLocationID, which (if non-null) refers to the containing location (i.e. countries contain states and states contain cities). That way, we can save the most granular location that we know.

The disadvantage of this is that the querying would be messy. We are using an ORM, which would work very well with the first approach, as you can choose a country, then navigate to the Regions property to see the associated regions, then for any region, use the Cities property. Approach #2 would require a manual query every time you wanted to get a different entity.

This approach is very flexible, but is probably more flexible than we need, and probably not worth the extra effort required to implement it.

3) The final suggestion was for the Systems table to have a CountryID, a StateID and CityID, and populate whichever one(s) is/are known. This is simple, and doesn't require any placeholder entries, but has one huge disadvantage (in my non-expert opinion), and that is that it seems to involve repeated data which can easily become invalid.

For example, suppose CountryA contains StateA, which in turn contains CityA, whereas CountryB contains StateB, which in turn contains CityB - imagine the same with Cs. There is little to stop someone mistakenly setting a system to be in CountryA, StateB and CityC, even though CityC is not in StateB, and StateB is not in CountryA.

Having played around with this a bit, my feeling is strongly towards the first approach, despite the requirement for placeholder rows, but I would like to hear what the experts have to say.
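For illustration, a minimal sketch of that first approach (names, types and the placeholder flag are just assumptions, not a definitive design):

CREATE TABLE Countries (
    CountryID INT PRIMARY KEY,
    Name      VARCHAR(100) NOT NULL
);

CREATE TABLE States (
    StateID       INT PRIMARY KEY,
    CountryID     INT NOT NULL REFERENCES Countries (CountryID),
    Name          VARCHAR(100) NOT NULL,
    IsPlaceholder BOOLEAN NOT NULL DEFAULT FALSE  -- hidden in the UI
);

CREATE TABLE Cities (
    CityID        INT PRIMARY KEY,
    StateID       INT NOT NULL REFERENCES States (StateID),
    Name          VARCHAR(100) NOT NULL,
    IsPlaceholder BOOLEAN NOT NULL DEFAULT FALSE
);

-- Systems then carries only a CityID; state and country are always
-- reachable through joins, even when the "city" is a placeholder.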

Anyone any comments?

MySQL data retained after OS reinstall

Posted: 08 Aug 2013 03:04 PM PDT

Something strange happened today.

I reinstalled my OS, and when I reinstalled MySQL Workbench some of my old data was still there. I think the actual local db is empty, but all the connections, models, and listed servers are still there. I can't view which users still exist because I get a connection error. The server is running, though.

I have a backup; I do a dump nightly. I was going to restore from that, but I expected a clean slate when I reinstalled. Should I delete everything before moving ahead with the restore?

Does anyone have an idea why/how that happened?

Is PostgreSQL appropriate for processing this large but simple dataset?

Posted: 08 Aug 2013 02:07 PM PDT

I have a dataset I'm not sure how to store. The structure is simple: 30 numeric attributes identified by about 15 billion x, y, and t values. We're expecting ~17k t values and maybe 90k x/y combinations, though it could be some combination that gives us 20 million records in the end.

The processing involves retrieving 1-10 columns for each x and y pair and storing various calculated numeric values. Are we nearing/passing the limit of fast response times for Postgres with this many rows?
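For concreteness, the layout we have in mind looks roughly like this (all names are placeholders); with the composite primary key, retrieving the rows for one x/y pair is an index range scan:

CREATE TABLE samples (
    x     integer NOT NULL,
    y     integer NOT NULL,
    t     integer NOT NULL,
    attr1 numeric,
    attr2 numeric,
    -- ... attributes 3 through 30 ...
    PRIMARY KEY (x, y, t)
);

-- the retrieval described above:
-- SELECT t, attr1, attr2 FROM samples WHERE x = 42 AND y = 7;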

The processing is all done in-house by one person and shouldn't need to happen more than a couple dozen times as we settle on what summaries we want. So we're not worried about a high number of writes or connections, strict security, updates to existing records, table relations, or losing data to network connection issues; basically, we're not concerned about the kinds of things I understand the ACID side of an RDBMS like Postgres brings to the table. But we also don't need to replicate or distribute the data, provide high availability, change the schema on the fly, or handle an unusual number of writes (say a dozen for each of the 90k x/y pairs), which are the kinds of things I understand NoSQL DBs offer.

So I guess the real issue is read-speed out of Postgres for a table of this size and the real question is whether there's a better storage mechanism for what we need to accomplish. Does the answer change if we have 40 billion records? 60?

Condense Query Results with Wildcard

Posted: 08 Aug 2013 02:08 PM PDT

I have the following query:

select programVersion, count(programVersion)
from records R
inner join Programs P
    on P.programID = R.usageProgramID
where programName = "word"
group by programVersion;

Results:

+----------------+-----------------------+
| programVersion | count(programVersion) |
+----------------+-----------------------+
| 12.0.0         |                     1 |
| 12.2.0         |                 32190 |
| 12.2.3         |                    20 |
| 12.2.6         |                  3990 |
| 12.2.8         |                 13128 |
| 12.3.1         |                   114 |
| 12.3.2         |                     4 |
| 12.3.4         |                    20 |
| 14.0.0         |                    57 |
| 14.1.0         |                 86587 |
| 14.1.3         |                    44 |
| 14.1.4         |                   728 |
| 14.2.0         |                    42 |
| 14.2.2         |                   130 |
| 14.2.3         |                 88219 |
| 14.2.5         |                    66 |
| 14.3.0         |                   126 |
| 14.3.1         |                    36 |
| 14.3.2         |                    74 |
+----------------+-----------------------+

For this purpose I don't really care about the minor version increments. What I'm really interested in is the differences in major increments. Is there a way that I can get the results combined past the first period?

Example Output

+----------------+-----------------------+
| programVersion | count(programVersion) |
+----------------+-----------------------+
| 12             |         total of 12's |
| 14             |         total of 14's |
+----------------+-----------------------+
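One possible MySQL rewrite, assuming the version strings always use dots as separators: group on the prefix before the first period with SUBSTRING_INDEX.

select substring_index(programVersion, '.', 1) as majorVersion,
       count(*)
from records R
inner join Programs P
    on P.programID = R.usageProgramID
where programName = "word"
group by substring_index(programVersion, '.', 1);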

how to run Db2 export command in shell

Posted: 08 Aug 2013 05:14 PM PDT

I am trying to run the following DB2 command through the Python pyodbc module.

IBM DB2 Command : "DB2 export to C:\file.ixf of ixf select * from emp_hc"

I am successfully connected to the DSN using the pyodbc module in Python, and it works fine for SELECT statements.

But when I try to execute the following command from Python IDLE 3.3.2:

cursor.execute(" export to ? of ixf select * from emp_hc",r"C:\file.ixf") pyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0104N An unexpected token "db2 export to ? of" was found following "BEGIN-OF-STATEMENT". Expected tokens may include: "". SQLSTATE=42601\r\n (-104) (SQLExecDirectW)')

or:

cursor.execute(" export to C:\file.ixf of ixf select * from emp_hc")

Traceback (most recent call last):
  File "", line 1, in
    cursor.execute("export to C:\myfile.ixf of ixf select * from emp_hc")
pyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0007N The character "\" following "export to C:" is not valid. SQLSTATE=42601\r\n (-7) (SQLExecDirectW)')

Am I doing something wrong? Any help will be greatly appreciated.


From what I have come to know, db2 export is a command run in the shell, not through SQL via ODBC.

Can you please give me some more information on how to run the command in the shell? I am confused about what that means. Any guide or quick tutorial would be great.
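It means that EXPORT is a DB2 CLP (command-line) command, not an SQL statement, so the ODBC driver rejects it. One possible workaround on recent DB2 LUW releases is the SYSPROC.ADMIN_CMD stored procedure, which accepts EXPORT as SQL and can therefore be called through pyodbc; note that the output path is then resolved on the database server, not on the client:

CALL SYSPROC.ADMIN_CMD('EXPORT TO /tmp/emp_hc.ixf OF IXF SELECT * FROM emp_hc')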

How can I generate TPC-DS queries for PostgreSQL?

Posted: 08 Aug 2013 01:14 PM PDT

All,

I am using TPC-DS to run some benchmarks on a few versions of postgres that I have. I noticed that there is no template for generating the TPC-DS queries with the target database as postgres. The query_templates directory contains template files for ansi, sqlserver, db2, oracle, and netezza, but none for postgres.

I have tried running the queries generated by each of the previously listed targets on postgres, but they all seem to have some kind of syntax that postgres doesn't support, because most of them fail to run.

Do you know if it is possible to generate these TPC-DS queries for postgres? Or is there a postgres.tpl file that can be downloaded somewhere?

Thanks.

Grouping and counting rows by existence of matching criteria in any of several columns

Posted: 08 Aug 2013 12:11 PM PDT

Disclaimer: I've inherited a database which is probably not quite as "normal" as I would like, and I'm limited to Jet/MS Access 2010 for my db backend and interface, as well as my choice of reporting software (currently Style Intelligence, which has its own data-interpretation interface and supports standard SQL, etc.). I'm open to some data restructuring if necessary.

I have several tables of assessment data from which I need to count rows that "pass" or "fail" the assessments (one row = one assessment). Each row has a series of columns which contain the answers YES, NO, NA or null, and these answers comprise the assessment. A NO value in any of the answer columns constitutes a fail. I need a count of rows that contain NO in any of these columns, and then a count of the remainder (rows with no NO values in any of the answer columns). I don't need to count the number of NOs in one row, just group and count by the existence of at least one NO answer in any of the given columns. There are other tables which store assessment data in a similar structure, but instead use 1, 2 and 3 as the "yes, no, NA" answers, because they store the value from an MS Access form's option group.

Here are some examples of rows that I have, and how they would be grouped for counting:

AsmtID  Q1      Q2      Q3      Q4      Q5      Q6      Q7      Desired grouping
3       NO      YES     YES     YES             NA              FAIL
4       NO      YES     YES     YES     YES     NO      NA      FAIL
5       YES     YES     YES     YES     YES     YES     YES     PASS
6       YES     YES     YES     YES     YES     YES     NA      PASS

It's worth noting that I'm working with a very small amount of data, certainly no more than 200 rows from any one table at a time, so computational efficiency isn't a big concern for me. My best guess would be some routine which steps through each column on a row and returns Pass or Fail based on the first NO value found. I would really appreciate any insight on how to interpret this data. Thank you in advance!
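Since the pass/fail test is a single boolean over the seven columns, one Access SQL sketch (the table name Assessments is assumed; for the tables that store 1/2/3, compare against 2 instead of 'NO'):

SELECT IIf(Q1='NO' OR Q2='NO' OR Q3='NO' OR Q4='NO'
           OR Q5='NO' OR Q6='NO' OR Q7='NO', 'FAIL', 'PASS') AS Outcome,
       Count(*) AS Assessments
FROM Assessments
GROUP BY IIf(Q1='NO' OR Q2='NO' OR Q3='NO' OR Q4='NO'
             OR Q5='NO' OR Q6='NO' OR Q7='NO', 'FAIL', 'PASS');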

Full-text Search XML Properties

Posted: 08 Aug 2013 05:56 PM PDT

I'm trying to determine whether I can retrieve a property (attribute) value from an xml blob stored in a FILESTREAM-enabled column using Full-text search. Here is a sample where message is a varbinary(max) field that stores xml messages:

SELECT message FROM AuditTable
WHERE CONTAINS ( PROPERTY ( message, 'AccountNo' ), '123456');

On the MSDN page Search Document Properties with Search Property Lists, the summary at the top of the page includes this statement: "On the other hand, the XML IFilter does not emit properties."

I'm confused because it appears that this should work for an XML document. I'm starting to prep an environment to test, but I wanted to ask first just in case someone already knew.

Conversation_handle in "CO" but unable to send the message successfully

Posted: 08 Aug 2013 05:27 PM PDT

I have an issue with Service Broker. This is the first time I am working with Service Broker to debug an issue rather than to develop with it. Some of the conversation_handles in sys.conversation_endpoints are still marked as CO, and when Service Broker sends messages from the initiator queue to the target queue it uses the conversation_handle for the corresponding @@SPID, but the target queue never initiates that conversation_handle to receive the message.

I see my messages still sitting in sys.transmission_queue, and all messages sent on the same conversation_handle failed to reach the target queue; interestingly, though, those conversation_handles are all still marked as CO in sys.conversation_endpoints.

I ran Profiler too, with all broker events selected, and that is how I came to know that the conversation_handle on the target queue is not initiated as soon as a message is sent.
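For reference, these are the two catalog views I have been checking; the transmission_status column on the stuck rows states why delivery keeps failing:

SELECT conversation_handle, to_service_name, transmission_status
FROM sys.transmission_queue;

SELECT conversation_handle, state_desc, far_service
FROM sys.conversation_endpoints
WHERE state = 'CO';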

I'm using SQL Server 2005 SP2.

How do I reclaim reserved space in the sysjobstepslogs table

Posted: 08 Aug 2013 05:20 PM PDT

I have a 36GB msdb database on my SQL Server 2008 R2 database server. From what I have read, this is past the point of being gigantic. When I look at the tables in the msdb database, the sysjobstepslogs table is using 97% of the space. I've done enough research to find out that the cause was several jobs whose steps had the "Log to table" option set, combined with long-running transactions and snapshot isolation on the msdb database. This option has since been disabled on those jobs.

I have tried to reduce its size with a truncate table, reindex, reorganize, check table, shrink file, and shrink database (pretty much in that order), but with 0 rows the sysjobstepslogs table still indicates that it has 35+GB reserved and used.

It has been suggested that I disable then reenable snapshot isolation to try to free up the space but according to this MSDN forum discussion you cannot disable snapshot isolation on the msdb database.

I have contemplated dropping and recreating the table but how do I recreate the table as a system table and even if I did, is such a thing wise?

Any ideas how to reclaim this space?
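One hedged possibility: msdb ships with a procedure for purging job step logs, and rebuilding the table's indexes afterwards may hand the reserved pages back (the job name below is hypothetical):

USE msdb;

-- clear the logged step output for one job
EXEC dbo.sp_delete_jobsteplog @job_name = N'MyNightlyJob';

-- then try to release the reserved space
ALTER INDEX ALL ON dbo.sysjobstepslogs REBUILD;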

how to explain indexes

Posted: 08 Aug 2013 11:21 AM PDT

This question is about teaching: When I learned databases, we talked about indexes being like the card catalog in the town library. We had at least an author index, a title index, then the Dewey decimal number was the access path to the book, which is like the whole record in the database.

Books? What, on paper? Index cards in drawers?

The youngsters don't really know what I'm talking about, so how do we best explain it nowadays?

(Feel free to enjoy my lawn, just please recognize the difference between the grass and the astroturf at the prep school, ok?)

Does the Binary Data column shown in SQL Profiler contain useful information?

Posted: 08 Aug 2013 10:06 AM PDT

Many of the events that SQL Profiler captures include a Binary Data column. The documentation for these event classes merely repeats the fact that this column contains "Binary value dependent on the event class captured in the trace."

I'm particularly interested in the RPC:Starting and RPC:Completed events; I notice that the binary data for each event appears the same whenever the same query is executed, although the Starting and Completed events have different binary data. But other events also display information here that I would be interested in decoding. This assumes, of course, that the data would be even relevant or meaningful.

Is the content of these fields documented anywhere, and is it ever meaningful to a developer/administrator?

Select date from another table [migrated]

Posted: 08 Aug 2013 12:30 PM PDT

I really need your help. Please help me out to get this to work.

I have these two tables; the relation between them is user_id. I managed to select whom I follow and to display their images on my wall. However, I have this strange issue: the photos are sorted DESC as I want, but they are also sorted by user_id. This means that whoever I followed first has their images sorted first.

I tried almost every possible way to get the photos sorted by date DESC with the newest photo at the top, but I couldn't. Here are my tables, and I'll show you everything I tried:

CREATE TABLE IF NOT EXISTS `photos` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `img` varchar(255) NOT NULL,
  `about` text NOT NULL,
  `date` varchar(222) NOT NULL,
  `user_id` varchar(255) NOT NULL,
  `likes` int(255) NOT NULL,
  `down` int(255) NOT NULL,
  PRIMARY KEY (`id`)
);

CREATE TABLE IF NOT EXISTS `follow` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `followers` varchar(255) NOT NULL,
  `following` varchar(255) NOT NULL,
  PRIMARY KEY (`id`)
);

One

followers_photo.php, where I retrieve the following IDs:

$the_user = mysql_query("SELECT * FROM users WHERE username = '$getuser'");
$tuser = mysql_fetch_array($the_user);
$tu_id = $tuser['id'];

$followers = mysql_query("SELECT distinct follow.*
    FROM follow JOIN photos ON (follow.followers = $tu_id)
    order by photos.date DESC");
$f_s_count = mysql_num_rows($followers);
  • index.php, where I display the images:

while ($uim = mysql_fetch_assoc($followers)) {
    $i_id = $uim['following'];
    $followers_image = mysql_query("SELECT distinct photos.*
        FROM photos JOIN follow ON (photos.user_id = $i_id)
        GROUP BY RAND() order by date DESC");


The above works, but as I mentioned it sorts the images by date DESC and by user_id, which I don't want. I want it to stop sorting the images by user_id.

Two

followers_photo.php

$the_user = mysql_query("SELECT * FROM users WHERE username = '$getuser'");
$tuser = mysql_fetch_array($the_user);
$isuser = mysql_num_rows($the_user);

$tu_id = $tuser['id'];
$tu_name = $tuser['username'];

// users who are following me
$followers = mysql_query("SELECT distinct follow.* FROM photos join follow
    on follow.followers = photos.user_id
    WHERE follow.followers = $tu_id
    order by photos.date DESC");
$f_s_count = mysql_num_rows($followers);

index.php

while ($uim = mysql_fetch_assoc($followers)) {
    $i_id = $uim['following'];
    $followers_image = mysql_query("SELECT * FROM photos
        WHERE user_id = '$i_id' order by date DESC");

The above does the same as the first attempt. Could anyone point me the right way? I want to sort the photos from the people I follow by date DESC, so whoever posts last comes first. Thanks guys, I appreciate your help a lot, and sorry for my bad English.

Update

With the solution that vegatripy gave, images are duplicated and not ordered as wanted. The left side of the screenshot shows what happens now; what I want is the sort shown on the right. Any ideas? http://i.stack.imgur.com/Cod6z.jpg

Solved

$followers = mysql_query("SELECT PHOTOs.IMG, follow.followers, follow.following,
    photos.date, photos.img, photos.likes, photos.down, photos.id, photos.about
    FROM FOLLOW
    JOIN PHOTOs ON ( PHOTOs.USER_ID = FOLLOW.FOLLOWING )
    WHERE FOLLOW.FOLLOWERS = 31
    ORDER BY PHOTOs.DATE DESC");
$f_s_count = mysql_num_rows($followers);
while ($fimage = mysql_fetch_assoc($followers)) {
    $i_id = $fimage['following'];
    $disimg = $fimage['img'];
    $disid = $fimage['id'];
    $distime = $fimage['date'];
    $i_like = $fimage['likes'];
    $i_unlike = $fimage['down'];

SQL Server DR, mirroring or virtualise and mirror

Posted: 08 Aug 2013 10:43 AM PDT

I have a client with a Small Business Server 2008 Premium installation. This includes SQL Server 2008 Standard on a separate 2008 server. On the SQL Server we run several line-of-business applications.

The client has got to the size where a failure would cripple the business. We have ample backups protecting the data, but it would obviously take time to get a new server up and running.

What we want is a standby server in a remote office, but I need to know the best way of replicating the SQL Server. I see there are different ways, snapshot or transactional etc., but I haven't found a decent article describing their pros and cons.

I don't want to put much more stress on the SQL Server, as it is quite busy with both reads and updates. I don't want the primary server waiting on the secondary server to update before it commits a write.

Also, the table structure and views change fairly regularly as new features/updates are added to the LOB applications.

I'm wondering if anyone has any opinion on what method may help.

I'm also wondering if we would be better off virtualising the server and mirroring that, which I believe works quite well with the new Hyper-V.

edit (OP, once you have associated your accounts, please incorporate this further information to make a single, cohesive question.)

Sorry for the delay.

In terms of data loss, I suppose an hour wouldn't be the end of the world.

The hardware wouldn't be identical, as it's currently on an HP ML350 G6.

I was talking about Hyper-V 2012's replication to a completely separate server.

I am a bit concerned about disk IO, as one of the LOB DBs is pretty hefty and slow, although we got the developers to rewrite some of the worst bits to make it better.

My biggest concern is whether, when writing data, it has to wait for both the local and remote server to update. Does log shipping need to do this?

The remote server will be on a remote ADSL connection, and I would like to avoid having to replicate it twice.

Also, the SQL Server is part of Small Business Server 2008 Premium but was bought as OEM. Are we allowed to either move the second Windows Server license onto another machine (the main server will remain on the original hardware), or can we move just the SQL onto a new Windows 2012 box? This would mean we could move the SQL onto better hardware and use the old one as the DR box.

How to avoid timeouts during big DML operations

Posted: 08 Aug 2013 05:21 PM PDT

I have a big insert script I need to run. It's about 55,000 records and 160 columns. The script is already created and I can't create it again.

The problem I have is that this runs for about 4 hours or so, and during that time the system that uses this database gets really slow and times out a lot.

I would not care if my INSERT were slower, but it shouldn't impact other users.

I was thinking of doing batches of, say, 500 rows with a WAITFOR between them, but was wondering if there is a better option for doing this.
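A rough sketch of that batching idea, assuming the 55,000 rows can first be loaded into a staging table (all object names are hypothetical): each loop moves 500 rows in its own short transaction, then sleeps so other sessions can get a word in.

DECLARE @batch int = 500;

WHILE EXISTS (SELECT 1 FROM dbo.StagingRows)
BEGIN
    -- move one batch from staging into the real table
    DELETE TOP (@batch) FROM dbo.StagingRows
    OUTPUT deleted.Col1, deleted.Col2  -- list all 160 columns in practice
    INTO dbo.TargetTable (Col1, Col2);

    WAITFOR DELAY '00:00:05';          -- let other sessions breathe
END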

Calculated Measure to get only most current from one dimension on snapshot fact but keep other filters

Posted: 08 Aug 2013 10:07 AM PDT

I'm working on a tabular cube in SSAS 2012 SP1 CU4. I have 3 dimensions (Requisition, Requisition Status, Date) and 1 fact (Requisition Counts). My fact table is at the grain of requisitionKEY, RequisitionStatusKEY, SnapshotDateKey.

I have calculated measures that essentially get the lastnonempty value (like a semi-additive measure) for the given period whether it is Year, Month, Or Date:

Openings :=
CALCULATE (
    SUM ( 'Requisition Counts'[NumberOfOpeningsQT] ),
    FILTER ( 'Date', 'Date'[DateKey] = MAX ( 'Requisition Counts'[SnapshotDateKEY] ) )
)

This works well until you throw Requisition Status into the mix. I have rows for each requisition for every day in 2013. For one of the requisitions, the Requisition Status was Pending for the first 6 months and then it changed to Approved on all data from July 1 to date. When I summarize the number of openings for the requisition at the month level for July, users see two rows: the sum of the openings for the last populated day in July that it was pending and the sum of the openings for the last populated day in July that it was approved.
[screenshot: pivot table showing the two status rows]

Although the total of 2 is correct, I need to change this calculation so that I only get the most current requisition status for the date period selected (Approved) and either show 0 or null or blank for the Pending approval line in the pivot table.

The Requisition Status table looks like this: [screenshot of the Requisition Status table]

Update: Here is a link to a PowerPivot model I made with some scrubbed data in it to help answer this question; it should better explain the table schemas. The NumberOfOpeningsQT field is basically the number of open positions they have for that job. Sometimes it is 1, sometimes it is more than 1. It doesn't usually change over time, but I guess it could. I'm trying to make the Openings calculation give the desired answer. I threw some other calculations in there to show things I had tried that had promise but that I couldn't get to work.

Stored procedures vs. inline SQL

Posted: 08 Aug 2013 05:23 PM PDT

I know stored procedures are more efficient through the execution path than inline SQL in applications. However, when pressed, I'm not super knowledgeable about why.

I'd like to know the technical reasoning for this (in a way that I can explain it to someone later).

Can anyone help me formulate a good answer?

High CPU usage on SQL server - Slow queries [closed]

Posted: 08 Aug 2013 05:22 PM PDT

Our MS SQL Server is using about 95% of the CPU-power.

After a server (hardware) restart or a SQL Service restart, the usage is 0% and slowly increases over the course of 1-3 days, depending on how much it is used.

When it's over 80%, every query is extremely slow.

Our website deals with a lot of big queries, so some of them take 45-60 seconds. After a restart (CPU usage less than 80%), the same queries take 11-20 seconds.


How can I fix this? I've read online that affinity masks can adjust the CPU usage, but the Affinity settings are disabled. I cannot change them. Is this because I only have 1 processor?

There are plenty of tricks to do with the queries themselves, but our websites and services are quite big, and there is simply too much to change.

Most of them are already pretty well optimized.


I cannot keep restarting the SQL Service, even though it only takes 2 seconds, because we have an alarm service that allows people to call in and record a message; a selected group will then be called and hear the recorded message.

This system is used by hundreds of Search and Rescue teams, and if the SQL Service restarts during an alarm, the alarm will terminate and the person who called it in will not be notified.


I have searched all over the place, but found nothing except for stuff about "Affinity Masks", which I cannot change.

There must be a way to clear out the CPU cache, without terminating current queries... right?


SQL: Microsoft SQL Server 11.0.2100.60
OS: Windows Server 2012 x64
Processor: 2.30 GHz
RAM: 4.00 GB

mysqldump freezing on a specific table

Posted: 08 Aug 2013 02:57 PM PDT

I dumped a database (sys_data), which is very big (800GB, all data in one ibdata file), from a remote server, but the dump got stuck at one table (tb_trade_376). My dump command:

mysqldump -uxx -pxx -h192.168.1.xxx --single-transaction sys_data > /home/sys_data.sql

When the dump was blocked:

show processlist;
5306612 | root | 192.168.1.161:57180 | sys_data | Query | 23955 | Sending data | SELECT /*!40001 SQL_NO_CACHE */ * FROM `tb_trade_376`

On the other hand I can dump the table tb_trade_376 successfully if I just dump the table only.

mysqldump -uxx -pxx -h192.168.1.xxx \
    --single-transaction sys_data tb_trade_376 > /home/tb_trade_376.sql

This works well and quickly! The table tb_trade_376 has about 700,000-800,000 rows.

What is the next step in investigating why I can't dump the whole database? How can I make it work?
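A low-risk diagnostic sketch while the dump is hung, to see what the dump thread and InnoDB are waiting on:

SHOW FULL PROCESSLIST;        -- what state is the dump connection in?
SHOW ENGINE INNODB STATUS\G   -- long-running transactions or lock waits?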

In place upgrade from MySQL 5.5 to 5.6.11 removes all users from user table

Posted: 08 Aug 2013 01:56 PM PDT

On Windows, I upgraded from 5.1 to 5.5 no problem.

  1. Copied my 5.1 data folder into my 5.5 instance
  2. Started mysqld skipping grants
  3. Ran mysql_upgrade

All good, but going from 5.5 to 5.6:

  1. Copied 5.5 data folder to 5.6 instance
  2. Started mysqld skipping grants
  3. Ran mysql_upgrade

but I get:

C:\Users\QAdmin>mysql_upgrade
Looking for 'mysql.exe' as: C:\Program Files\MySQL\MySQL Server 5.6\bin\mysql.exe
Looking for 'mysqlcheck.exe' as: C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlcheck.exe
Running 'mysqlcheck' with connection arguments: "--port=3306"
Running 'mysqlcheck' with connection arguments: "--port=3306"
mysql.user_info                                    OK
Running 'mysql_fix_privilege_tables'...
Running 'mysqlcheck' with connection arguments: "--port=3306"
C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlcheck.exe: Got error: 1130: Host 'localhost' is not allowed to connect to this MySQL server when trying to connect
FATAL ERROR: Upgrade failed

If I look at the mysql.user table it is completely empty.

  • Has anyone seen this or know what is going on?
  • During the "upgrade" the user table gets erased and when it tries to connect it can't?

Thanks.

SSRS 2008 R2 setup issue

Posted: 08 Aug 2013 04:57 PM PDT

I have installed SSRS 2008 R2 on my desktop and server. When I hit the reports link on my desktop

http://mypc/Reports_mypc/Pages/Folder.aspx  

all I get to see is the home page of the desktop SSRS instance.

I can't create a new folder or data source or anything of the sort.

On the server where I am attempting to set up SSRS 2008 R2, all I get is a white screen that shows the virtual folder name in a large font, followed by the version of the Reporting Services server on the next line. This is not leaving me any clues as to what needs to be fixed. On both PCs I am using the credentials of the local admin. Any clues on what needs to be fixed?

What is the difference between int(8) and int(5) in mysql?

Posted: 08 Aug 2013 11:56 AM PDT

I found out that if you have a field defined as INT(8) without ZEROFILL, it behaves exactly like INT(5):

in both cases the range is

−2,147,483,648 to 2,147,483,647, from −(2^31) to 2^31 − 1

Or am I missing something?

I found this answer: http://dba.stackexchange.com/a/370/12923

The (5) represents the display width of the field. The manual states:

The display width does not constrain the range of values that can be stored in the column. Nor does it prevent values wider than the column display width from being displayed correctly. For example, a column specified as SMALLINT(3) has the usual SMALLINT range of -32768 to 32767, and values outside the range permitted by three digits are displayed in full using more than three digits.

The display width, from what I can tell, can be used to left-pad numbers that are less than the defined width. So 00322, instead of 322. TBH, I've never used it.

But it doesn't affect the storage size of the column. An int will take up more space than a smallint.

So there seems to be no difference, then.
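A small demonstration sketch (the table name is hypothetical) of the only place the display width shows up:

CREATE TABLE t (a INT(5) ZEROFILL, b INT(8) ZEROFILL, c INT(5));
INSERT INTO t VALUES (322, 322, 322);
SELECT * FROM t;
-- a = 00322, b = 00000322, c = 322; all three are stored in 4 bytes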

MySQL table relations, inheritance or not?

Posted: 08 Aug 2013 05:57 PM PDT

I'm building a micro CMS, using MySQL as the RDBMS and the Doctrine ORM for mapping.

I would like to have two types of pages: a Static Page and a Blog Page. A static page would have page_url and page_content stored in the database. A blog page would have page_url but no page_content; the blog would have posts, categories...

Let's say I have a route like this:

/{pageurl}  

This is a page whose URL can be home, news, blog... That page can be a Static page, in which case I would just print page_content, but it can also be a Blog page, in which case I would print the latest posts as content.

How should I relate these Static Page and Blog Page tables? Is this inheritance, since both are pages with their own URL but different content? Should I use inheritance, so that both Static and Blog page extend a Page that holds page_url? Or should I make another table, page_types, and store information about the available page types there?
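For what it's worth, this maps naturally onto what Doctrine calls class table inheritance: a shared page table holding page_url plus one table per subtype. A minimal sketch (all names are assumptions):

CREATE TABLE page (
  id        INT AUTO_INCREMENT PRIMARY KEY,
  page_url  VARCHAR(255) NOT NULL UNIQUE,
  page_type ENUM('static', 'blog') NOT NULL  -- the discriminator
);

CREATE TABLE static_page (
  page_id      INT PRIMARY KEY,
  page_content TEXT NOT NULL,
  FOREIGN KEY (page_id) REFERENCES page (id)
);

CREATE TABLE blog_post (
  id      INT AUTO_INCREMENT PRIMARY KEY,
  page_id INT NOT NULL,  -- the blog page this post belongs to
  title   VARCHAR(255) NOT NULL,
  body    TEXT NOT NULL,
  FOREIGN KEY (page_id) REFERENCES page (id)
);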

Error Authenticating Proxy Account when Executing SSIS job

Posted: 08 Aug 2013 01:53 PM PDT

I have a SQL Server instance that runs 5 scheduled tasks each night, each of which runs SSIS packages.

These packages have been running for years, and the associated steps run via a proxy account (PackageExecutor). PackageExecutor is associated with a SQL credential that was a former domain admin account.

Soon, the domain associated with this admin account is going to be shut down. I have to use a new account, on a new domain, as the admin account associated with my proxy, PackageExecutor. When I created a new credential for the new admin account and associated it with PackageExecutor, I started to get the following error when I tried to run one of my SQL jobs as a test:

Unable to start execution of step 1 (reason: Error authenticating proxy
*Domain\Admin_Account*@*fully.qualified.domain.com*, system error: Logon
failure: unknown user name or bad password.). The step failed.

If I'm understanding this reasonably explicit error, what it's telling me is that the credential account associated with my proxy is incorrect. How do I validate this?

I know that this account is legitimate: I've already associated it with every relevant server group, and I've made it a sysadmin user on the server.

What could be causing this problem?

To be clear, I haven't mistyped the account name or the password associated with the proxy credential. However, when I entered the account name Domain\Admin_Account and clicked the Check Names button, SQL Server automatically transformed the user ID to the fully-qualified version. I'm not sure if this has anything to do with the problem.

I'm at a bit of a loss. I've given my credential account full access to everything that I can think of. What might I need to do to get this to work?

UPDATE

Sorry, one more quick mention: I've found this MSDN KB article. Resolution method #1 is what I've been doing for years; the others don't seem to apply, or I'm missing something. Any tips or clarification would be beneficial.
