Friday, June 21, 2013

[how to] Why aren't my first few Postgres WAL segments being archived?

Why aren't my first few Postgres WAL segments being archived?

Posted: 21 Jun 2013 09:03 PM PDT

The pg_xlog directory for my cluster (on master) looks like this:

-rw------- 1 postgres postgres  16M Jun 21 21:42 000000010000000000000001
-rw------- 1 postgres postgres  16M Jun 21 22:42 000000010000000000000002
-rw------- 1 postgres postgres  16M Jun 21 23:42 000000010000000000000003
-rw------- 1 postgres postgres  16M Jun 22 00:42 000000010000000000000004
-rw------- 1 postgres postgres  16M Jun 22 01:42 000000010000000000000005
-rw------- 1 postgres postgres  16M Jun 22 01:49 000000010000000000000006
-rw------- 1 postgres postgres  16M Jun 22 01:55 000000010000000000000007
-rw------- 1 postgres postgres  16M Jun 22 02:05 000000010000000000000008
-rw------- 1 postgres postgres  16M Jun 22 02:30 000000010000000000000009
-rw------- 1 postgres postgres  16M Jun 22 02:50 00000001000000000000000A

The pg_xlog/archive_status directory looks like this:

-rw------- 1 postgres postgres 0 Jun 22 01:49 000000010000000000000006.done
-rw------- 1 postgres postgres 0 Jun 22 01:55 000000010000000000000007.done
-rw------- 1 postgres postgres 0 Jun 22 02:05 000000010000000000000008.done
-rw------- 1 postgres postgres 0 Jun 22 02:30 000000010000000000000009.done

When I first turned on archiving, 006 was the current segment; why weren't 001-005 archived anyway? The last time I set up archiving on an existing cluster (which was yesterday; I've been experimenting), all log segments were archived when I turned archiving on, even though the pg_xlog directory contained more than 50 previous log segments.

Relevant settings:

wal_level = hot_standby
archive_mode = on
wal_keep_segments = 2048  # I know, right :)
archive_timeout = 3600
archive_command = 'rsync -aq --delay-updates --ignore-existing %p postgres@db-slave:/mnt/pgsql/'

Note: I did change all of these settings (including wal_level, which was minimal previously) when I turned on archiving earlier. Could it be that my WAL level wasn't sufficient to warrant archiving, or that there is a mismatch between the prior level and the current one?
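
For what it's worth: segments that were already completed before archive_mode came on never get .ready markers in pg_xlog/archive_status, so the archiver skips them; only segments finished while archiving is active get picked up. A quick way to check the live settings and push the current segment out to the archiver (a minimal sketch, assuming PostgreSQL 9.x and a superuser session):

-- check what the running server actually has
SHOW wal_level;
SHOW archive_mode;
SHOW archive_command;

-- complete the current WAL segment so the archiver processes it
-- without waiting for archive_timeout
SELECT pg_switch_xlog();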

Cannot start sdows.ear module on WebLogic 12c

Posted: 21 Jun 2013 07:52 PM PDT

I've set up a CentOS 6.3 x64 box with OpenJDK 1.6, and installed Oracle 11gR2 and WebLogic 12c. I'm trying to deploy a WFS to the box as an exploded EAR.

Following the Oracle hands-on lab document (the 2nd lab here), I've tried to deploy sdows.ear, but I got weird results. The deployment seems OK: the console web pages report no errors and show all-successful messages. However, all I see at the WFS URL are errors:

<ogc:ServiceExceptionReport version="1.2.0" xsi:schemaLocation="http://www.opengis.net/ogc http://localhost:8888/examples/servlet/xsds/OGC-exception.xsd">
    <ogc:ServiceException code="WFS-1042">Exception during processing request</ogc:ServiceException>
</ogc:ServiceExceptionReport>

When I check the back end server output I see:

........
<WS data binding error>Ignoring element declaration {http://svrproxy.ws.spatial.oracle/}makeSpatialRequestElement because there is no entry for its type in the JAXRPC mapping file.
<WS data binding error>Ignoring element declaration {http://svrproxy.ws.spatial.oracle/}makeSpatialRequestResponseElement because there is no entry for its type in the JAXRPC mapping file.
[oracle.spatial.ws.svrproxy.SdoWsService, INFO] Ws Configure file is /oracle/app/oracle/11.2.0/dbhome_1/md/jlib/sdowsDD.ear/sdows.warWEB-INF/conf/WSConfig.xml
[oracle.spatial.ws.WSProperties, ERROR] error parsing config file: /oracle/app/oracle/11.2.0/dbhome_1/md/jlib/sdowsDD.ear/sdows.warWEB-INF/conf/WSConfig.xml (No such file or directory)
[oracle.spatial.ws.WSProperties, ERROR] error closing reader: null
[oracle.spatial.ws.WSProperties, ERROR] Oracle Spatial WS Server could not set up configuration parameters due to exception: null
........

I've also tried to deploy the module directly using the sdows.ear file, but similar error occurs:

........
<WS data binding error>Ignoring element declaration {http://svrproxy.ws.spatial.oracle/}makeSpatialRequestElement because there is no entry for its type in the JAXRPC mapping file.
<WS data binding error>Ignoring element declaration {http://svrproxy.ws.spatial.oracle/}makeSpatialRequestResponseElement because there is no entry for its type in the JAXRPC mapping file.
[oracle.spatial.ws.svrproxy.SdoWsService, INFO] Ws Configure file is nullWEB-INF/conf/WSConfig.xml
[oracle.spatial.ws.svrproxy.SdoWsService, FATAL] java.lang.NullPointerException
........

What have I done wrong?

In Job Step > Properties, I only see two tabs: Package and Configuration

Posted: 21 Jun 2013 04:34 PM PDT

I am trying to set a job step that executes a package to use the 32-bit runtime, as it uses the SQL 10 driver against a SQL 2000 database. The job is running on SQL Server 2012. I see all kinds of examples of how to do this, where the job step properties page has 9 tabs (Set Values, Verification, Command Line, General, Configurations, Command Files, Data Sources, Execution Options and Logging). Execution Options has a checkbox for 32-bit mode.

When I look at my job step, logged in to the server as an Admin, running SSMS as administrator, all I see are two tabs: Package and Configuration. I deployed the packages using the Project Deployment method. How do I get those other tabs to show up?
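
Those nine tabs come from the package deployment model's job step UI; with the project deployment model the step shows only Package and Configuration (some builds expose a 32-bit runtime checkbox under Configuration > Advanced). One way to force the 32-bit runtime regardless is to run the package through the SSIS catalog yourself; a minimal T-SQL sketch (folder, project, and package names are hypothetical):

DECLARE @execution_id BIGINT;

EXEC SSISDB.catalog.create_execution
    @folder_name     = N'MyFolder',        -- hypothetical
    @project_name    = N'MyProject',       -- hypothetical
    @package_name    = N'MyPackage.dtsx',  -- hypothetical
    @use32bitruntime = 1,                  -- run under the 32-bit runtime
    @execution_id    = @execution_id OUTPUT;

EXEC SSISDB.catalog.start_execution @execution_id;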

Update existing SQL Server 2008 table via Flat File using SSIS

Posted: 21 Jun 2013 01:50 PM PDT

I am trying to update existing records, or add any new ones, in a SQL Server table using a flat file, and I need it to work as an SSIS package. I don't get an option to do a source query like I do for Excel, etc. when setting up the import task, and even when I was trying to use Excel my options seemed limited to simple selects for the source query.

I really need to be able to run a full script that checks the first column (which is my key), updates existing records, then adds any new ones. Maybe I am taking the wrong approach here altogether, but I thought there was a way to do this with SSIS. Can someone point me in the right direction? If it were two tables and I didn't have to use SSIS, I would just write the SQL and be done with it. It's really the bridge between flat files and SSIS (automating the import) that I'm looking for here.

Just to give a quick background of what I ultimately am trying to accomplish: We do exports of our data into flat files and those will go to a ftp server. We then have the customer import from those files into table representations of them so they can report against them.
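
A common shape for this is: a data flow lands the flat file into a staging table, then an Execute SQL task runs an upsert keyed on the first column. A minimal T-SQL sketch (table and column names are hypothetical):

MERGE dbo.Target AS t
USING dbo.Staging AS s
    ON t.KeyColumn = s.KeyColumn
WHEN MATCHED THEN
    UPDATE SET t.Col1 = s.Col1,
               t.Col2 = s.Col2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (KeyColumn, Col1, Col2)
    VALUES (s.KeyColumn, s.Col1, s.Col2);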

How to gracefully handle a source database restore from backup in a data warehouse

Posted: 21 Jun 2013 01:13 PM PDT

We are facing a challenging situation with our data warehouse source databases.

Frequently these source databases will be restored from backups. There is high turnover among the data entry people who use the system, and they make many mistakes, so the business will just restore from a backup and start over.

But at this point, the data in the data warehouse will already have been processed and needs to be corrected. There could be thousands of rows of fact data which are no longer valid. Is there an appropriate design pattern to handle this scenario?

For example, would you need a way to rebuild the data warehouse from scratch? Would you attempt to restore a backup of the data warehouse and then build ETLs to synchronize? Would you delete data from your fact tables and then re-insert?
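
One hedged sketch of the batch-stamping approach: if every fact row records the ETL batch that loaded it, everything loaded after the source's restore point can be backed out and the ETL re-run from there. All names below are hypothetical:

DECLARE @SourceRestorePoint DATETIME = '2013-06-20T00:00:00';  -- hypothetical

DELETE f
FROM dw.FactInspection AS f   -- hypothetical fact table with a LoadBatchId column
JOIN dw.EtlBatchLog AS b      -- hypothetical ETL audit table
    ON f.LoadBatchId = b.BatchId
WHERE b.LoadedAt > @SourceRestorePoint;
-- then re-run the ETL from @SourceRestorePoint forward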

Mitigating MySQL replication delay in single master/multiple slave setup

Posted: 21 Jun 2013 01:10 PM PDT

For a particular web application that requires high availability, I am thinking of scaling out by setting up MySQL replication with one "write" master and multiple "read" slaves (as explained here: http://dev.mysql.com/doc/refman/5.1/en/replication-solutions-scaleout.html).

The only issue I have is how to mitigate the replication delay between the master and the slaves. For example, let's say that a user posts an item (writing to the master) and very quickly thereafter wants to view his posted items (reading from one of the slaves). What efficient solution can I put in place to make sure that a read operation from one of the slaves will always see the content of any previously completed write operation to the master?
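
One building block MySQL itself offers is MASTER_POS_WAIT(): capture the binlog position right after the write, then have the read session wait until the slave has applied it. A sketch (file name and position are hypothetical):

-- on the master, immediately after the write:
SHOW MASTER STATUS;  -- note File and Position

-- on the chosen slave, before the read:
SELECT MASTER_POS_WAIT('mysql-bin.000042', 1234567, 5);
-- returns >= 0 once the slave reaches that position; NULL or -1 on
-- timeout, in which case the app can fall back to reading the master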

Is my MySQL server overloaded? [closed]

Posted: 21 Jun 2013 12:26 PM PDT

What factors can help determine whether a MySQL server is overloaded or not?

Is it about the number of connected users, CPU usage, or something else?

Thanks.
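
A few server-side numbers that usually feed that judgment, as a minimal sketch:

SHOW GLOBAL STATUS LIKE 'Threads_connected';  -- clients currently connected
SHOW GLOBAL STATUS LIKE 'Threads_running';    -- connections actually executing
SHOW GLOBAL STATUS LIKE 'Slow_queries';       -- queries exceeding long_query_time
SHOW FULL PROCESSLIST;                        -- what each connection is doing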

How to calculate sigma > 1 in MySQL

Posted: 21 Jun 2013 09:47 AM PDT

I have a big set of data and am already calculating AVG and STD. But since STD covers only one sigma, and 31% of all data points fall outside avg +/- std, I want to know whether there is a good way to compute bounds at sigma > 1. Sigma 2 or 3 would be enough.

Thanks
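
A wider band doesn't need a new aggregate: compute AVG and STDDEV once in a derived table and multiply. A sketch with a hypothetical table and column:

SELECT t.*
FROM mytable AS t
CROSS JOIN (
    SELECT AVG(val) AS m, STDDEV(val) AS s
    FROM mytable
) AS stats
WHERE t.val BETWEEN stats.m - 2 * stats.s   -- 2 sigma; use 3 * stats.s for 3
              AND stats.m + 2 * stats.s;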

Changing the representation of NULLs in pg_dump plaintext output

Posted: 21 Jun 2013 06:44 PM PDT

I'm trying to convert a large-ish Postgres DB (500 GB) to SQL Server 2012. After investigating a few third-party tools and being disappointed in features, performance, or both, I started pursuing a simple pg_dump/bulk-import solution. Things looked promising until I realized that pg_dump represents NULLs in plain text as "\N", which causes the bulk insert to vomit in cases of type mismatch. Even if I were to automate the pg_dump process to produce a single file per table, some of the individual tables involved are very large (20-50 GB), and performing a comprehensive find-and-replace, using even the fast file-editing options in Linux or a Perl script, adds too much overhead to the time required for the import/export.

I'm hoping there's a way to modify the NULL representation in the pg_dump output that I'm not aware of, or failing that, to get some recommendations for alternative approaches to this process in terms of tools or strategies. Thanks in advance for your help.
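
One route around pg_dump is COPY, whose NULL marker is configurable. A per-table sketch (9.0+ option syntax; a server-side path needs superuser, or use psql's \copy for a client-side file):

COPY my_table TO '/tmp/my_table.csv'
    WITH (FORMAT csv, NULL '', HEADER false);  -- NULLs become empty fields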

Formatting T-SQL in SSMS 2012

Posted: 21 Jun 2013 03:56 PM PDT

According to this Microsoft document:

http://msdn.microsoft.com/en-us/library/ms174205.aspx

I am supposed to be able to use Ctrl+K then Ctrl+D to format my SQL documents in SQL Server Management Studio 2012, but when I use that combo I get the error:

The key combination (Ctrl+K, Ctrl+D) is bound to command (Format Document) which is not currently available.

I am trying to make modifications to an existing SQL document that has no formatting to it at all which makes it extremely difficult to read. Does anyone know how to make the Format Document command available so I can have SQL format this code for me?

Query for master in Postgres replication

Posted: 21 Jun 2013 12:40 PM PDT

I'm trying to find out if there is a way to query for the master, from a slave PostgreSQL server that has been set up with server replication.

From a slave, I can:

SELECT pg_is_in_recovery()  

And this will give me a 't' result if I'm on a slave and an 'f' result on the master, which is step one.

Next, I'd like to run a query that gives me some information about the master that it's replicating. Preferably an IP address or hostname.

For the record, I can query the master with:

SELECT * from pg_stat_replication  

And this will give me information about slaves. I am hoping that there is a reciprocal method for querying a slave.

Is this possible? If so, how?
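
I'm not aware of a dedicated function for this in 9.x, but the master's address lives in recovery.conf in the slave's data directory, which a superuser can read from SQL. A sketch, assuming 9.1 or later for the one-argument form:

SELECT pg_read_file('recovery.conf');
-- look for the primary_conninfo line, e.g.
-- primary_conninfo = 'host=10.0.0.5 port=5432 user=replicator'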

Replication Options with data subsets

Posted: 21 Jun 2013 09:59 AM PDT

We have an application that makes use of a SQL Server 2012 database (ServerA). Some data that we need to display, though, comes from another database, which is SQL Server 2008 (ServerSOURCE) and lives on a separate server.

The plan is to create a replication database on the same server as ServerA, called ServerREP, and replicate data from ServerSOURCE into ServerREP. But we only want certain tables from ServerSOURCE. It's a small subset of the tables. ServerSOURCE has hundreds of tables. We only need around 10.

Additionally, the data in the tables could be reduced. For example, say we have a Person table, which has a PersonTypeId. We only want rows where PersonTypeId = x.

Can this be achieved with replication? Can we get subsets of data from a subset of tables from the source into our new replicated database? Would it be something like creating a VIEW on the source and replicating the results of that view as a table in the replicated database?
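
Transactional replication can do both kinds of trimming without a view: publish only the articles (tables) you need, and attach a row filter for the PersonTypeId restriction. A hedged sketch (publication, article, and filter values are hypothetical):

EXEC sp_addarticle
    @publication   = N'ServerSourceSubset',  -- hypothetical
    @article       = N'Person',
    @source_object = N'Person',
    @filter_clause = N'PersonTypeId = 1';    -- horizontal (row) filter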

Issues starting Postgres server [closed]

Posted: 21 Jun 2013 09:59 AM PDT

I'm trying to go through the book "7 Databases in 7 Weeks" and I'm completely stuck at starting a Postgres server.

My current issue is that when I run pg_ctl start I get the following error:

LOG:  database system was shut down at 2013-06-12 17:18:24 PDT
LOG:  invalid magic number 0000 in log file 0, segment 1, offset 0
LOG:  invalid primary checkpoint record
LOG:  invalid secondary checkpoint record
PANIC:  could not locate a valid checkpoint record
LOG:  startup process (PID 1454) was terminated by signal 6: Abort trap
LOG:  aborting startup due to startup process failure

I have no idea how to go about fixing this. I'm running PostgreSQL 9.2.4 on Mountain Lion 10.8.3.

Copy a SQL Server database from one server to another on demand

Posted: 21 Jun 2013 08:29 PM PDT

I have two servers, Prod and Test, both running SQL Server 2008 RTM.

Is it possible to have a PowerShell or VBScript script that could copy a database from one server to another?

The copy should of course include all contents of the database, with the exception of privileges so I don't lock myself out from editing the destination.

The destination database should be cleared/reset or completely overwritten, so both source and destination are identical.

Additionally, the connection to the source should be read-only and (if possible) only able to initiate the copy process without actually having access to the data.

I am slightly familiar with PowerShell, so if this only means connecting and starting a task it should be doable. Or do I have to look for more advanced solutions?
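
The heavy lifting such a script usually wraps is plain backup/restore. A minimal T-SQL sketch (share path and logical file names are hypothetical):

-- on Prod (source): COPY_ONLY leaves the backup chain untouched
BACKUP DATABASE MyDb
    TO DISK = N'\\share\MyDb.bak'
    WITH COPY_ONLY, INIT;

-- on Test (destination): overwrite whatever is there
RESTORE DATABASE MyDb
    FROM DISK = N'\\share\MyDb.bak'
    WITH REPLACE,
         MOVE 'MyDb'     TO N'D:\Data\MyDb.mdf',
         MOVE 'MyDb_log' TO N'D:\Data\MyDb_log.ldf';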

Thank you.

After migration to new servers, Domain Users do not have access to databases

Posted: 21 Jun 2013 12:34 PM PDT

I migrated databases to new servers; however, the applications that previously used the databases are failing to load. I have changed the connections, etc. The jobs also seem to be failing. I have a domain account that is the job owner. However, when I try to execute the job under my username, I get the following error:

Executed as user: NT AUTHORITY\SYSTEM. Login failed for user.....[SQLSTATE 28000] (Error 18456).

Is this related to Domain Users not having appropriate read and write access to the databases? Also, how would I give all domain users permission to execute stored procedures?
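
For the stored procedure part, a common pattern is a dedicated role granted EXECUTE at the schema level, with the domain group as a member. A sketch (the group name is hypothetical, and the Windows group is assumed to already have a server login):

CREATE USER [MYDOMAIN\Domain Users] FOR LOGIN [MYDOMAIN\Domain Users];
CREATE ROLE db_executor;
GRANT EXECUTE ON SCHEMA::dbo TO db_executor;
EXEC sp_addrolemember 'db_executor', 'MYDOMAIN\Domain Users';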

Creating a global temp table in MySQL

Posted: 21 Jun 2013 10:34 AM PDT

I am working on a MySQL stored procedure.

I need to create a temp table and I want to access this temp table whenever I execute this stored procedure.

I know we can't access the temp table in another request in MySQL.

Is there a way to create a temp table globally, or how else can I access the temp table across multiple requests?
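
Since MySQL has no global temporary tables, the usual workaround is a permanent scratch table keyed by session, which any execution of the procedure can read. A minimal sketch (names are hypothetical):

CREATE TABLE IF NOT EXISTS proc_scratch (
    session_key VARCHAR(64) NOT NULL,
    payload     VARCHAR(255),
    KEY (session_key)
) ENGINE=InnoDB;

-- inside the procedure: write, and later read back, under the caller's key
INSERT INTO proc_scratch (session_key, payload)
VALUES (CONNECTION_ID(), 'example');  -- or an application-supplied key that
                                      -- survives across connections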

MongoDB problems recovering a member of the replica set

Posted: 21 Jun 2013 07:35 PM PDT

I have a sharded database with 2 replica sets (RS1 and RS2), each RS with 2 servers. I had a problem yesterday with one member of RS2: the mongod instance crashed, throwing an error. After that I tried to recover the member by making it sync with the other member of the replica set (it took a long time to finish the sync), and then I got the same error again:

Tue May  7 12:37:57.023 [rsSync] Fatal Assertion 16233
0xdcf361 0xd8f0d3 0xc03b0f 0xc21811 0xc218ad 0xc21b7c 0xe17cb9 0x7f57205f2851 0x7f571f99811d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdcf361]
 /usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xa3) [0xd8f0d3]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl17syncDoInitialSyncEv+0x6f) [0xc03b0f]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl11_syncThreadEv+0x71) [0xc21811]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl10syncThreadEv+0x2d) [0xc218ad]
 /usr/bin/mongod(_ZN5mongo15startSyncThreadEv+0x6c) [0xc21b7c]
 /usr/bin/mongod() [0xe17cb9]
 /lib64/libpthread.so.0(+0x7851) [0x7f57205f2851]
 /lib64/libc.so.6(clone+0x6d) [0x7f571f99811d]
Tue May  7 12:37:57.155 [rsSync] ***aborting after fassert() failure

Tue May  7 12:37:57.155 Got signal: 6 (Aborted).

Tue May  7 12:37:57.159 Backtrace:
0xdcf361 0x6cf729 0x7f571f8e2920 0x7f571f8e28a5 0x7f571f8e4085 0xd8f10e 0xc03b0f 0xc21811 0xc218ad 0xc21b7c 0xe17cb9 0x7f57205f2851 0x7f571f99811d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdcf361]
 /usr/bin/mongod(_ZN5mongo10abruptQuitEi+0x399) [0x6cf729]
 /lib64/libc.so.6(+0x32920) [0x7f571f8e2920]
 /lib64/libc.so.6(gsignal+0x35) [0x7f571f8e28a5]
 /lib64/libc.so.6(abort+0x175) [0x7f571f8e4085]
 /usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xde) [0xd8f10e]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl17syncDoInitialSyncEv+0x6f) [0xc03b0f]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl11_syncThreadEv+0x71) [0xc21811]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl10syncThreadEv+0x2d) [0xc218ad]
 /usr/bin/mongod(_ZN5mongo15startSyncThreadEv+0x6c) [0xc21b7c]
 /usr/bin/mongod() [0xe17cb9]
 /lib64/libpthread.so.0(+0x7851) [0x7f57205f2851]
 /lib64/libc.so.6(clone+0x6d) [0x7f571f99811d]

Any idea why this may be happening? How can I make this server sync and work? My last surviving server is now running as a secondary; is there a way to make it primary for a while to get the data out of it?

Thanks in advance!

MySQL server crashed.

Posted: 21 Jun 2013 03:35 PM PDT

Help! I managed to crash MySQL last night. I am on a Mac using the native version that came with Mountain Lion, and I was upgrading from 5.5 to 5.6. I have followed instructions in this forum to delete the installation, but trying to re-install 5.5 says that there is a newer version and won't install, and trying to install 5.6 fails. I found this error in the console:

4/21/13 10:16:56.000 PM kernel[0]: exec of /Volumes/mysql-5.6.11-osx10.7-x86/mysql-5.6.11-osx10.7-x86.pkg/Contents/Resources/preflight denied since it was quarantined by TextEdit and created without user consent, qtn-flags was 0x00000006

Help me, please? I am stuck and in a world of hurt and despair.

SQL Server replication conflicts after migration from 2000 to 2008

Posted: 21 Jun 2013 06:35 PM PDT

I got a suggestion over at Stack Overflow to post here; grateful for any and all help.

Please bear with me; I think this might take a while to explain. For many years now my company has hosted a solution for a client involving a complex web application with a smaller mobile solution, consisting of IIS 6 for the web app, SQL Server 2000 on its own server, and a Visual Studio 2005 Pocket PC app replicating with SQL Server via merge replication. This whole time the mobile solution has been very solid and did not require many updates, so we have replicated with sscesa20.dll the entire time.

We recently migrated this entire solution as follows:

  • Web Box - New Win Server 2008 R2 running IIS 7.5
  • SQL Server Box - New Win Server 2008 R2 running SQL Server 2008
  • Mobile app - small updates converted to Visual Studio 2008 and Windows for Mobile 5

The new web box received the 64 bit version of SQL Server Compact 3.5 tools and we now call sqlcesa35.dll from the mobile device to perform merge replication.

The basic idea of the entire process is that mobile field users get assigned safety inspections to perform on buildings. When a facility in the system needs an inspection, an inspection record is created via the web app in the DB. A status flag is set, and HOST_NAME() is utilized to make sure only records for a given inspector with this particular status show up on their mobile device. The user can sync multiple times in the field, sending their data up to the SQL Server/web app and receiving more inspections down, or other updates such as lookup table data... typical merge replication, and it has been working great for years. Once the field user changes the status of the inspection, it travels from the mobile device to the SQL Server database and is removed from their iPaq. The inspection has additional workflow on the web app from here on out.

Now on to the problem. We migrated everything, publishing the exact same subset of tables with the same joins/filters. The settings on the publication, as far as I can tell, are the same. However, when a user gets a new inspection down to the handheld for the very first time, enters data, then synchronizes back to the database, every row has a conflict. Since we have default conflict resolution, the publisher wins and the data collected in the field is lost. The inspection now looks blank, just as it did when it first came down to the mobile device. If the user syncs again, with or without changes on the mobile (subscriber), all is well. Any future changes from the mobile device are intact.

It is as if the web/db data is newer than the handheld data. I am 100% sure it is not. I have looked at table triggers, web app logic, etc. We were very careful not to include any application changes to the DB/web app/mobile app with respect to data manipulation during this migration.

Here is a summary of the order of operation:

New row created in the database >> mobile user receives data >> mobile user updates data >> synchronizes; data is lost. Conflicts show up for all of the lost data.

From here on out any additional mobile changes are captured. Merge replication works in both directions flawlessly.

Thanks for taking the time to read; please help. I am stuck after 3 days.

How to connect to a Database made by Oracle SQL Database?

Posted: 21 Jun 2013 05:35 PM PDT

So I am fairly new at this, so if you could keep that in mind in your answers, it would be much appreciated. I installed Oracle SQL Database on my Windows PC. It came in two zip files. I installed it, and the online portion of it works fine. I can log in with username sys and password **. What I am trying to do is connect to this newly created database from another computer through SQL Developer. I have read that in order to do this, you need to change the hostname of the database from "localhost" to an IP address. How do I do that, and is there anything else I need to do to make this work?

I also found this LINK. Is this something I should do? I do not have a Domain though.

listener.ora

# listener.ora Network Configuration File: C:\app\hyderz\product\11.2.0\dbhome_2\network\admin\listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = CLRExtProc)
      (ORACLE_HOME = C:\app\hyderz\product\11.2.0\dbhome_2)
      (PROGRAM = extproc)
      (ENVS = "EXTPROC_DLLS=ONLY:C:\app\hyderz\product\11.2.0\dbhome_2\bin\oraclr11.dll")
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))
    )
  )

ADR_BASE_LISTENER = C:\app\hyderz

tnsnames.ora

# tnsnames.ora Network Configuration File: C:\app\hyderz\product\11.2.0\dbhome_2\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.

LISTENER_ORCL =
  (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))

ORACLR_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
    )
    (CONNECT_DATA =
      (SID = CLRExtProc)
      (PRESENTATION = RO)
    )
  )

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

Backup plan for MySQL NDB cluster database, not InnoDB

Posted: 21 Jun 2013 01:34 PM PDT

I have a database which will grow to more than 250 GB. All data is in the NDB engine (2 data nodes), and no other MySQL engine is used for data storage.

  • What are the best approaches or plans for MySQL Cluster NDB database backup?

  • Does MySQL Enterprise Backup (part of MySQL Cluster CGE), or any other tool, back up NDB?

  • Can a beginner like me do these NDB backups easily, or does it need specialised skills?

Kind regards,

Easiest Way to Move 100 Databases

Posted: 21 Jun 2013 09:10 PM PDT

I need to move about 150 databases from one server to another server.

  1. SQL Server 2008 Web Edition
  2. SQL Server 2012 Preview (in a different datacenter - East Coast Azure)

I was planning on moving them one at a time using RedGate Packager; however, this will take a while.

Is there a faster and easier way?
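
One low-tech alternative is to script a backup per database in one pass and restore them on the other side. A sketch that generates the BACKUP commands (share path hypothetical):

SELECT 'BACKUP DATABASE ' + QUOTENAME(name) +
       ' TO DISK = N''\\share\' + name + '.bak'' WITH COPY_ONLY, INIT;'
FROM sys.databases
WHERE database_id > 4;  -- skip master, tempdb, model, msdb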

How to optimize a log process in MySQL?

Posted: 21 Jun 2013 02:34 PM PDT

In my project, I have about 100,000 users and can't control their behavior. What I would like to do is log their activity in a certain task. Every activity is one record, which includes columns like user_id and some tag_ids.

The problem I have is that these tasks can in some cases go up to 1,000,000 per year per user. So if I stored all these activities in one table, it would obviously become very big (= slow).

What is best to do here? Create a single table per user (so I have 100,000 log tables), or put all these activities in one table? And what kind of engine should I use?

One important thing to note: although I simplified the situation a bit and the following doesn't look normal, users can also change values in these tables (like tag_ids).
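
A single table partitioned on user_id usually scales better than 100,000 per-user tables, and InnoDB fits since rows get updated. A hedged sketch (column sizes and partition count are guesses):

CREATE TABLE activity_log (
    id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_id    INT UNSIGNED NOT NULL,
    tag_id     INT UNSIGNED NOT NULL,
    created_at DATETIME NOT NULL,
    PRIMARY KEY (user_id, id),  -- the partition column must be in every unique key
    KEY (id)
) ENGINE=InnoDB
PARTITION BY HASH (user_id) PARTITIONS 64;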

MySQL: Lock wait timeout exceeded

Posted: 21 Jun 2013 08:35 PM PDT

I have a developer who has been trying to alter a large table (~60M rows). Via LiquidBase/JDBC they're running:

ALTER TABLE foo DROP FOREIGN KEY fk_foo_1;  

Today while it was running I checked in on it periodically; everything looked normal, the query was running, in state "copying to tmp table", I could see the temp table on disk getting larger and larger (still plenty of free space). I could see other processes blocked waiting for this table to be unlocked. Finally after about 4.5 hours, they got the "lock wait timeout exceeded; try restarting transaction" error. This is actually the 2nd time they've tried, and it seems to fail right about when I would expect it to complete.

innodb_lock_wait_timeout is set to the default 50; I can't imagine the ALTER itself would run for so long only to be affected by that. No errors logged, and no deadlocks or other weirdness seen in SHOW ENGINE INNODB STATUS. Can anyone help me with other ideas? I'm fairly stumped on this one.
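
One thing worth ruling out before the next attempt: both relevant timeouts are session-settable, so the ALTER's own session can be given far more room. A sketch, assuming MySQL 5.5+:

SET SESSION lock_wait_timeout = 86400;         -- metadata lock wait, seconds
SET SESSION innodb_lock_wait_timeout = 3600;   -- InnoDB row lock wait
ALTER TABLE foo DROP FOREIGN KEY fk_foo_1;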

thanks

postgresql: how do I dump and restore roles for a cluster?

Posted: 21 Jun 2013 12:14 PM PDT

Where are roles stored in a cluster, and how do I dump them?

I did a pg_dump of a db and then loaded it into a different cluster, but I get a lot of these errors:

psql:mydump.sql:3621: ERROR:  role "myrole" does not exist   

So apparently the dump of my db does not include roles. I tried dumping the 'postgres' db, but I don't see the roles there either.

Do I need to use pg_dumpall --roles-only?

PostgreSQL versions: 8.4.8 and 9.1.4. OS: Ubuntu 11.04 (Natty).
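
Roles are cluster-wide objects stored in the shared catalog pg_authid (exposed through the pg_roles view), which is why no single database's pg_dump includes them; pg_dumpall --roles-only is the supported way to dump them. To see what is there:

SELECT rolname, rolsuper, rolcanlogin
FROM pg_roles;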

Converting RTF in a text column to plain text in bulk

Posted: 21 Jun 2013 05:29 PM PDT

I have a legacy system with about 10 million rows in a table. In that table there is a column of type text; most values are standard text, but about 500 thousand rows have RTF markup in them. I need to convert the RTF-formatted text into plain text.

My current method: a C# program that loads the query into a DataTable using a SqlDataAdapter and uses the WinForms RichTextBox control to do the conversion.

void bw_DoWork(object sender, DoWorkEventArgs e)
{
    count = 0;

    rtbRTFToPlain = new RichTextBox();

    using (SqlDataAdapter ada = new SqlDataAdapter("select note_guid, notes from client_notes", Globals.SQLConnectionString))
    using (SqlCommandBuilder cmb = new SqlCommandBuilder(ada))
    {
        DataTable dt = new DataTable();
        ada.UpdateCommand = cmb.GetUpdateCommand();

        ada.Fill(dt);

        int reportEvery = dt.Rows.Count / 100;
        if (reportEvery == 0)
            reportEvery = 1;
        foreach (DataRow row in dt.Rows)
        {
            if (count % reportEvery == 0)
                bw.ReportProgress(count / reportEvery);

            try
            {
                if (((string)row["notes"]).TrimStart().StartsWith("{") == true)
                {
                    rtbRTFToPlain.Rtf = (string)row["notes"];
                    row["notes"] = rtbRTFToPlain.Text;
                }
            }
            catch
            {
            }

            count++;
        }
        bw.ReportProgress(100);

        this.Invoke(new Action(() =>
        {
            this.ControlBox = false;
            this.Text = "Updating database please wait";
        }));
        ada.Update(dt);
    }
}

This works great for small tables; however, this is the first time I have had to run it on a table with such a large data set (some of the RTF values can be several megabytes in size, with embedded pictures), and I am getting OutOfMemory errors in my C# program.

I know I can chunk my query down into smaller batches, but I wanted to see if there is a better way that I was missing to strip off the RTF formatting.

Should I just do the same thing as my current solution but query the data out in smaller chunks at a time, or is there a better way to do this?
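
If chunking turns out to be the answer, keyset paging on the primary key keeps each batch cheap. A T-SQL sketch (batch size and the variable's bookkeeping are left to the app):

DECLARE @last_note_guid UNIQUEIDENTIFIER = '00000000-0000-0000-0000-000000000000';

SELECT TOP (1000) note_guid, notes
FROM client_notes
WHERE note_guid > @last_note_guid  -- resume after the last key processed
  AND LEFT(LTRIM(CAST(notes AS VARCHAR(MAX))), 1) = '{'  -- rows that look like RTF
ORDER BY note_guid;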

Designing a database for a site that stores content from multiple services?

Posted: 21 Jun 2013 04:35 PM PDT

I'm building a site that implements David Allen's Getting Things Done that pulls in your email, Facebook newsfeed, tweets from those you follow on Twitter, and more services are planned. The problem is that I'm not a DBA, and I'm not sure how to design the database so that as I add features to the site, I won't have to artificially corrupt people's raw data for the purposes of storing it (for example, I want to add the ability to get RSS feeds sometime in the future, but I'm not sure how I'd do that without making a mess).

I've put down my initial ideas using DBDesigner 4, below, you'll find the diagram and the SQL.

A few notes to help clarify things.

  • The Accounts table is for storing authentication tokens and such for facebook, twitter, and such.
  • The messages table is incomplete.
  • The password fields in emailconfiguration and users are encrypted: users with a one-way hash, emailconfiguration with a two-way scheme.
  • I'm using a MySQL database using the InnoDB storage engine on Amazon RDS.
  • Each project may have one context associated with it.
  • Each message may have a project and context, but it's not required.
  • The imap, smtp, and pop3 tables exist to remove duplication within email configuration.
  • Queries to this database are generated by Korma, a Clojure library.

Can someone please point me in the right direction? I'd also be willing to look at using a NoSQL database if suggested. Thank you for your time and consideration.

[diagram: site database schema]

Here's the SQL create script just in case anyone wants to see it.

CREATE TABLE Pop3 (
  domain VARCHAR NOT NULL,
  host VARCHAR NULL,
  port INTEGER UNSIGNED NULL,
  ssl BOOL NULL,
  PRIMARY KEY(domain)
)
TYPE=InnoDB;

CREATE TABLE Imap (
  domain VARCHAR NOT NULL,
  Host VARCHAR NULL,
  port INTEGER UNSIGNED NULL,
  ssl BOOL NULL,
  PRIMARY KEY(domain)
)
TYPE=InnoDB;

CREATE TABLE users (
  Username VARCHAR NOT NULL AUTO_INCREMENT,
  email VARCHAR NULL,
  password_2 VARCHAR NULL,
  activation VARCHAR NULL,
  is_active BOOL NULL,
  PRIMARY KEY(Username)
)
TYPE=InnoDB;

CREATE TABLE smtp (
  domain VARCHAR NOT NULL,
  host VARCHAR NULL,
  port INTEGER UNSIGNED NULL,
  ssl BOOL NULL,
  PRIMARY KEY(domain)
)
TYPE=InnoDB;

CREATE TABLE projects (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  users_Username VARCHAR NOT NULL,
  name VARCHAR NULL,
  description TEXT NULL,
  context INTEGER UNSIGNED NULL,
  PRIMARY KEY(id, users_Username),
  INDEX projects_FKIndex1(users_Username),
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

-- ------------------------------------------------------------
-- This is the table where access info for facebook, twitter, and others is stored.
-- ------------------------------------------------------------

CREATE TABLE Accountsi (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  users_Username VARCHAR NOT NULL,
  owner INTEGER UNSIGNED NULL,
  service VARCHAR NULL,
  username VARCHAR NULL,
  send INTEGER UNSIGNED NULL,
  receive INTEGER UNSIGNED NULL,
  info TEXT NULL,
  PRIMARY KEY(id, users_Username),
  INDEX Accountsi_FKIndex1(users_Username),
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

CREATE TABLE EmailConfiguration (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  users_Username VARCHAR NOT NULL,
  owner INTEGER UNSIGNED NOT NULL,
  address VARCHAR NULL,
  psswd VARCHAR BINARY NULL,
  domain VARCHAR NULL,
  PRIMARY KEY(id, users_Username),
  INDEX EmailConfiguration_FKIndex1(users_Username),
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

CREATE TABLE Messages (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  users_Username VARCHAR NOT NULL,
  message_id VARCHAR NULL,
  user_id VARCHAR NULL,
  account INTEGER UNSIGNED NULL,
  service VARCHAR NULL,
  project INTEGER UNSIGNED NOT NULL,
  context INTEGER UNSIGNED NOT NULL,
  PRIMARY KEY(id, users_Username),
  INDEX Messages_FKIndex1(users_Username),
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

CREATE TABLE context (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  projects_id INTEGER UNSIGNED NOT NULL,
  projects_users_Username VARCHAR NOT NULL,
  users_Username VARCHAR NOT NULL,
  name VARCHAR NULL,
  description TEXT NULL,
  PRIMARY KEY(id, projects_id, projects_users_Username, users_Username),
  INDEX context_FKIndex1(projects_id, projects_users_Username),
  INDEX context_FKIndex2(users_Username),
  FOREIGN KEY(projects_id, projects_users_Username)
    REFERENCES projects(id, users_Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION,
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

How to free up disk space? Which logs/directories to clean? [Oracle 11g]

Posted: 21 Jun 2013 11:34 AM PDT

I want to free up disk space on my Linux machine. I've drilled down into the space usage and found that the following directories are very large:

/u01/app/11.2.0/grid/cv/log
/u01/app/11.2.0/grid/log/diag/tnslsnr/r1n1/listener_scan2/alert (contains XML files)
/u01/app/11.2.0/grid/rdbms/audit (contains .aud files)
/home/oracle/oradiag_oracle/diag/clients/user_oracle/host_XXXXXXXXXX/alert (contains XML files)
/u01/app/oracle/diag/rdbms/crimesys/crimesys1/alert (contains XML files)

Can I delete the contents of these directories? (Note: I mean the contents, not the directories themselves.)

Thanks!
