Tuesday, June 4, 2013

[how to] Defining a two-way link

Defining a two-way link

Posted: 04 Jun 2013 05:28 PM PDT

I have a users table, and I want to define a "friends" relationship between two arbitrary users.

Up until now, I've used two different methods for this:

  1. The friends table contains user1 and user2. Searching for users involves a query that looks like
    ... WHERE @userid IN (`user1`,`user2`), which is not terribly efficient
  2. The friends table contains from and to fields. Initiating a friend request creates a row in that direction, and if it is accepted then a second row is inserted in the opposite direction. There is additionally a status column that indicates that this has happened, making the search something like:
    ... WHERE `user1`=@userid AND `status`=1

I'm not particularly satisfied with either of these solutions. The first one feels messy with that IN usage, and the second seems bloated having two rows to define a single link.

So that's why I'm here. What would you suggest for such a link? Note that I don't need any more information saved with it; I just need two user IDs associated with each other, and preferably some kind of status like ENUM('pending','accepted','blocked'), but that's optional depending on what the best design for this is.
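
A minimal sketch of the second approach as described above, assuming MySQL/InnoDB and a users table with an integer id primary key (the names are illustrative, not from the original schema):

CREATE TABLE friends (
    from_user INT UNSIGNED NOT NULL,
    to_user   INT UNSIGNED NOT NULL,
    status    ENUM('pending','accepted','blocked') NOT NULL DEFAULT 'pending',
    PRIMARY KEY (from_user, to_user),
    FOREIGN KEY (from_user) REFERENCES users (id),
    FOREIGN KEY (to_user)   REFERENCES users (id)
);

-- Accepting a request inserts the mirror row, so lookups stay one-directional:
SELECT to_user FROM friends WHERE from_user = @userid AND status = 'accepted';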

Is it worth separating columns into multiple tables for a one-to-one relational table

Posted: 04 Jun 2013 05:14 PM PDT

I need to decide on a database structure: whether to separate the one-to-one related columns into multiple tables linked by a single relationship id, or to just put all the columns into one table.

The number of columns would be around 45, and I need to sort data on different columns in different queries (one sort per query).

I will be using MyISAM storage engine.

Furthermore, there will be millions of rows in the table(s).
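
A minimal sketch of the two layouts being weighed, with purely illustrative names (the real table has roughly 45 columns):

-- Option A: everything in one wide table
CREATE TABLE record_wide (
    record_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    col_01    VARCHAR(100),
    col_02    INT,
    -- ... remaining columns ...
    col_45    DATETIME
) ENGINE=MyISAM;

-- Option B: columns split across 1:1 tables sharing the same id
CREATE TABLE record_core (
    record_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    col_01    VARCHAR(100)
) ENGINE=MyISAM;

CREATE TABLE record_detail (
    record_id INT NOT NULL PRIMARY KEY,   -- same value as record_core.record_id
    col_02    INT,
    col_45    DATETIME
) ENGINE=MyISAM;

Either way, a sort that should avoid a full scan generally needs an index on the sorted column, so the per-query sort columns matter at least as much as the split itself.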

How to rebuild / reinstall SSRS (ReportServer, ReportServerTempDB) databases?

Posted: 04 Jun 2013 04:30 PM PDT

Our server crashed. We got it back up and running; however, the databases mentioned above have been corrupted.

Is there a programmatic / automatic way of rebuilding or reinstalling the SSRS databases?

If not:

  1. Since we can still SELECT * from the databases, how do we grab the permissions and roles on different reports? (See the sketch after this list.)
  2. How do we retrieve subscription information?
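
For item 1, a minimal sketch against the standard ReportServer catalog tables (dbo.Catalog, dbo.PolicyUserRole, dbo.Users, dbo.Roles), assuming they are still readable:

SELECT c.Path,
       c.Name   AS ReportName,
       u.UserName,
       r.RoleName
FROM dbo.Catalog c
JOIN dbo.PolicyUserRole pur ON pur.PolicyID = c.PolicyID
JOIN dbo.Users u ON u.UserID = pur.UserID
JOIN dbo.Roles r ON r.RoleID = pur.RoleID
WHERE c.Type = 2                 -- 2 = report in the Catalog table
ORDER BY c.Path, u.UserName;

For item 2, the subscription definitions live in dbo.Subscriptions, commonly joined to dbo.Catalog on Report_OID = ItemID.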

"Row not found at subscriber" with a row filter

Posted: 04 Jun 2013 03:53 PM PDT

I had a production issue today where delivery of a handful of update statements failed at the subscriber with "row not found". What's odd about it is that I have a horizontal filter set up on the article in question such that the rows in question shouldn't have been at the subscriber. What's especially odd is that there were many other rows within the same transaction that also qualified for exclusion via the filter that didn't trigger the same error. I got past it by setting the distribution agent to ignore errors. Does anyone have any idea what happened and how I can keep it from happening in the future?

Select unique values with the highest time, in the most optimal way

Posted: 04 Jun 2013 03:24 PM PDT

Given a simple table with a text and a time field, I want to select X unique values from the text field, such that the selected row contains the highest value for time.

INSERT INTO `test` (`id`, `text`, `time`) VALUES
    (1, 'test1', 1), (2, 'test1', 3), (3, 'test1', 2),
    (4, 'test2', 1), (5, 'test2', 100), (6, 'test2', 20)

The query that meets most of my requirements is:

SELECT a.*
FROM test a
INNER JOIN (
    SELECT `text`, MAX(`time`) AS `time`
    FROM test
    WHERE `text` LIKE "te%"
    GROUP BY `text`
) b ON a.`text` = b.`text`
   AND a.`time` = b.`time`
ORDER BY a.`text` ASC
LIMIT 0,50

For small tables, this works perfectly. However, on my table (300k+ rows) it makes MySQL crash, due to the subquery.

Is it possible to optimize this query? If it cannot be optimized, would it be possible to select the last inserted unique values for text? (The id and time are theoretically uncorrelated, though in 99% of cases they correlate: the higher the id, the higher the time.)

Thank you
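
A minimal sketch of one common mitigation, assuming a table as small as shown apart from the row count: with a composite index on (`text`, `time`), MySQL can often resolve the grouped MAX subquery from the index alone instead of scanning the table.

ALTER TABLE `test` ADD INDEX `idx_text_time` (`text`, `time`);

This is a judgment call rather than a guaranteed fix; the derived-table join itself still has to be materialized.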

How to restrict row explosion in join? - Distinct or union?

Posted: 04 Jun 2013 02:10 PM PDT

Here are the cardinalities of my tables:

24    in     (T0)
24    in     (T1)
24    in     (T2)
576   in join(T0,T1)
576   in join(T1,T2)
13824 in join(T0,T1,T2)

I am joining in cross-product, which as you can see outputs |T_0|*...*|T_n-1| rows of data.

The join is expressed in a WHERE clause, and LIMIT and OFFSET values restrict the size of the output to 24 rows.

There are no constraints on the tables, as this is an OLAP data-set. I can however impose some uniqueness constraints. Each table is joined on the same attribute.

How do I reduce the size of the output to only show the distinct results, rather than the full cross-product?
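
A minimal sketch of the DISTINCT option, with hypothetical table names t0, t1, t2 and a shared join attribute k standing in for the real schema (MySQL/PostgreSQL-style LIMIT syntax):

SELECT DISTINCT t0.k, t0.val AS v0, t1.val AS v1, t2.val AS v2
FROM t0
JOIN t1 ON t1.k = t0.k
JOIN t2 ON t2.k = t0.k
ORDER BY t0.k
LIMIT 24 OFFSET 0;

Note that DISTINCT over the full cross-product typically still materializes all 13824 rows before de-duplicating, so whether this or a UNION of narrower selects is cheaper depends on how much of each row is actually needed.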

How to run a SELECT query within a while loop in PHP?

Posted: 04 Jun 2013 03:47 PM PDT

Within a while loop over a SELECT result in PHP, I need to run another MySQL SELECT query, like this:

$result1 = $mysqli->query("SELECT * FROM table1");

while ( $row = $result1->fetch_assoc() ) {
    if ( $row['X'] == 'X' ) {
        $result2 = $mysqli->query("SELECT * FROM table2");
        // .....
    }
}

but this does not work. I cannot JOIN the tables, because the if logic that decides when to SELECT from the other tables is too complicated.

If I update a column record in a table, will indexes that do NOT have this column in it be affected?

Posted: 04 Jun 2013 02:16 PM PDT

In terms of performance, if I have a table like so:

CREATE TABLE [TESTDATA].[TableA](
    [Col1] [nchar](5) NOT NULL,
    [Col2] [nchar](2) NULL,
    [Col3] [float] NULL,
    CONSTRAINT [TableA_PK] PRIMARY KEY CLUSTERED
    (
        [Col1] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

And then create a non-clustered index like this:

CREATE NONCLUSTERED INDEX [_idx_TableA]
ON [TESTDATA].[TableA] ([Col2])
WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
GO

If I perform an update to the table, only changing Col3, does the DB need to touch the index _idx_TableA?

I'm just curious how far the performance impact of an update reaches across the indexes.

Using wm_concat to concatenate rows, but in an order determined by another column

Posted: 04 Jun 2013 01:10 PM PDT

Let's say I have 3 columns: p_id, description, order_by. I am trying to do the following:

I would like to concatenate the description for all like p_id values. So we are talking a group by p_id sort of thing. But then, I want the description to be concatenated in the order of the order_by column (which is an integer). So my ideal (non-working) query would look like:

select p_id, wm_concat(description)
from my_table
where p_id = 12345
group by p_id
order by order_by asc

How do I concatenate rows in this fashion?
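
A minimal sketch of the ordered concatenation, assuming Oracle 11gR2 or later where LISTAGG is available (the undocumented wm_concat offers no guaranteed ordering):

SELECT p_id,
       LISTAGG(description, ', ') WITHIN GROUP (ORDER BY order_by) AS descriptions
FROM my_table
WHERE p_id = 12345
GROUP BY p_id;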

Firebird database performance after server upgrade/restart

Posted: 04 Jun 2013 12:02 PM PDT

Got a 350 GB database (more than 40M records, plus 0 - 1000 BLOBs for each record in another table). After upgrading Firebird to version 2.1.5 (mainly because of a filesystem cache issue) the database became terribly slow, for both insertion and fetching. How do I restore performance? I tried running some queries to force caching, which was somewhat helpful, and I currently have gbak running, but what's the "proper" way?

SQL Agent embedded PowerShell script in CmdExec step fails with import-module sqlps

Posted: 04 Jun 2013 12:14 PM PDT

SQL Server 2008R2 PowerShell 2.1

I am trying to create a SQL Agent job that dynamically backs up all non-corrupted SSAS databases on an instance without the use of SSIS. In my SQL Agent job, when I create a CmdExec step and point to a PowerShell script file (.ps1) like this:

powershell.exe "c:\MyPSFile.ps1"   

the job executes successfully (or at least gets far enough to only encounter logic or other syntax issues).

This approach won't work for a final solution, because there is a requirement to keep the PowerShell script internal to SQL. So I have a different CmdExec step that embeds the PowerShell script like so:

powershell.exe "import-module sqlps –DisableNameChecking    $server_name = "localhost"  $backup_location = "C:\BackupsGoHere"    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | out-null  $server = New-Object Microsoft.AnalysisServices.Server  $server.connect($server_name)    # Generate an array of databases to be backed up  foreach ($database in ($server.get_Databases() | Where {$_.EstimatedSize -gt 0 -and $_.Cubes.Count -gt 0})) {      $directory_path = $backup_location + "\" + $database.Name      if (!(Test-Path -Path $directory_path)) {          New-Item $directory_path -type directory | out-null      }      [string] $timestamp = date      $timestamp = $timestamp.Replace(':','').Replace('/','-').Replace(' ','-')      $database.Backup("$directory_path\$database-$timestamp.abf")  }  $server.disconnect()"  

However, when executed with the embedded script, the job errors out quickly with the following response:

The specified module 'sqlps' was not loaded because no valid module file was found in any module directory.

Why can't I reference the module from an embedded script, but doing so in a ps1 file works just fine?

Database replication using wamp?

Posted: 04 Jun 2013 02:05 PM PDT

I have created a POS system for our corporation. At HQ we have a WAMP server with the main database, and we also have more than 25 branches across the country.

I will set up a WAMP server in each branch, so I can access its database directly when putting WAMP online.

I want to set up MySQL replication with all branches, so that every query on any branch also affects the main database at HQ.

I tried to test this, but found no explanation of how to do it using WAMP on different PCs.

Can I add a unique constraint that ignores existing violations?

Posted: 04 Jun 2013 06:45 PM PDT

I have a table which currently has duplicate values in a column.

I cannot remove these erroneous duplicates but I would like to prevent additional non-unique values from being added.

Can I create a UNIQUE constraint that doesn't check existing rows for compliance?

I have tried using NOCHECK but was unsuccessful.

In this case I have a table which ties licensing information to "CompanyName"

EDIT: Having multiple rows with the same "CompanyName" is bad data, but we can't remove or update those duplicates at this time. One approach is to have the INSERTs use a stored procedure which will fail for duplicates... If it was possible to have SQL check the uniqueness on its own, that would be preferable.

This data is queried by company name. For the few existing duplicates this will mean that multiple rows are returned and displayed... While this is wrong, it's acceptable in our use case. The goal is to prevent it in the future. It seems to me from the comments that I have to do this logic in the stored procedures.
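
A minimal sketch of the stored-procedure guard mentioned above, assuming SQL Server and an illustrative dbo.CompanyLicense table (table, column, and parameter names are placeholders):

CREATE PROCEDURE dbo.InsertCompanyLicense
    @CompanyName NVARCHAR(200),
    @LicenseKey  NVARCHAR(200)
AS
BEGIN
    SET NOCOUNT ON;

    -- Reject new duplicates; the existing duplicate rows remain untouched.
    IF EXISTS (SELECT 1 FROM dbo.CompanyLicense WHERE CompanyName = @CompanyName)
    BEGIN
        RAISERROR('CompanyName already exists.', 16, 1);
        RETURN;
    END;

    INSERT INTO dbo.CompanyLicense (CompanyName, LicenseKey)
    VALUES (@CompanyName, @LicenseKey);
END;

Concurrent inserts can still slip past the EXISTS check, so if that matters the check and insert should be serialized inside a transaction with appropriate locking.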

PostgreSQL constraints on FK

Posted: 04 Jun 2013 12:23 PM PDT

I am trying to design a (part of a) database which has to accomplish the following:

  • There is a students table, containing a bunch of students.
  • There are educations in the database.
  • Each student can have 0..n educations.
  • There are x different types of educations, in which x is small (<10) and known in advance.
  • The different types of education all have the same type of data associated with them (location, name, etc.)
  • Some more data is associated with a student-education connection (e.g. grades, start date etc.).
  • The type of data which is associated with a student-education connection depends on the type of education (e.g. a masters degree has a specialization, a course does not).

I am trying to create a good database design to represent this data; however, there are quite a few difficulties. A design I came up with is as follows:

  • Student table, which contains student data
  • Education table, which contains data of educations. There is a type column to specify the type.
  • Student_Education junction table, which links students with educations.
  • x <type>Education tables which will contain data associated with student-education connections (thus a FK to Student_Education is always present). Each education type will have its own table.

However, there is a problem with this design: a <type>Education row should only be allowed to reference a Student_Education connection when the education type matches. E.g. a MasterEducation row can only reference a row in the Student_Education table that references a row in the Education table with type == master.

Would it be possible to add a constraint which can check exactly that?

If not, what other options are available?
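
One declarative option is to carry the education type into the junction table and enforce it through a composite foreign key, so each <type>Education table can pin the type with a CHECK. A minimal sketch with illustrative names, assuming student and education tables with integer id primary keys:

CREATE TABLE student_education (
    id             serial  PRIMARY KEY,
    student_id     integer NOT NULL REFERENCES student (id),
    education_id   integer NOT NULL REFERENCES education (id),
    education_type text    NOT NULL,           -- duplicated from education.type
    UNIQUE (id, education_type)
);

CREATE TABLE master_education (
    student_education_id integer NOT NULL,
    education_type       text    NOT NULL DEFAULT 'master' CHECK (education_type = 'master'),
    specialization       text,
    FOREIGN KEY (student_education_id, education_type)
        REFERENCES student_education (id, education_type)
);

To keep the duplicated education_type honest, education would also need UNIQUE (id, type) and student_education a second composite foreign key (education_id, education_type) referencing it.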

How can I reset a mysql table auto-increment to 1 in phpMyAdmin?

Posted: 04 Jun 2013 01:01 PM PDT

I know that in MySQL at the command line I can reset a table's auto-increment field to 1 with this:

ALTER TABLE tablename AUTO_INCREMENT = 1  

I am curious if there is a way to do this from within phpMyAdmin. Something like a check box to reset the auto-increment or something else along those lines?

Not that there is anything wrong with the command line approach. More one of those curiosity things I keep thinking on... Thanks in advance!

Convert Oracle database to Derby

Posted: 04 Jun 2013 08:10 PM PDT

I need to migrate an existing Oracle Database into a Derby one. I want to know if there's a tool, a script or another way to do that work.

It isn't using any of the interesting features of Oracle, as far as I can see from the database information in SQL Developer, except sequences and indexes.

Thanks!

Index on foreign key makes query extremely slow

Posted: 04 Jun 2013 01:40 PM PDT

We have recently been experiencing a tremendous query slowdown, with spills to the temp tablespace. A specific query causes this problem.

The queried table (table3) has an indexed PK, three FKs with indexes, and a compound unique constraint on the three FKs. The offending query looks like this:

SELECT ...
FROM table1 t1, table2 t2, table3 t3
WHERE t1.abs_id = ?
  AND t3.vgs_id = t1.vgs_id
  AND t3.ai_id > ?
  AND t2.id = t1.t2_id
  AND t2.status = 2
  AND t2.felddimension = 0
  ...

Only an instance restart solved the issue. Even killing connections did not help.

After further investigation of the FKs and the indexes, it turned out that the index on the t3.ai_id column causes the severe drop in performance. After disabling it, the unique constraint served the query extremely fast.

The problematic part is AND t3.ai_id > ? (range scan). Unique scan does not cause any trouble.

Now the question is, how can an index cause such a slowdown and moreover, how can I investigate the cause? It simply doesn't add up for me.

Comparative times: normally about 10 s; with the slowdown, more than 2 minutes or never returning.
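
One way to start investigating, assuming this is Oracle (the temp tablespace and instance restart point that way), is to capture and compare the optimizer plan with the index enabled and disabled; the select list and bind names below are placeholders:

EXPLAIN PLAN FOR
SELECT t3.ai_id                     -- placeholder select list
FROM table1 t1, table2 t2, table3 t3
WHERE t1.abs_id = :abs_id           -- placeholder binds
  AND t3.vgs_id = t1.vgs_id
  AND t3.ai_id > :ai_id
  AND t2.id = t1.t2_id
  AND t2.status = 2
  AND t2.felddimension = 0;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Comparing the two plans should show whether the optimizer switches from the unique composite index to a range scan with large hash/sort work that spills to temp.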

How should I best handle a rapidly growing database?

Posted: 04 Jun 2013 03:31 PM PDT

I have a database that I need to maintain.

Sadly, the setup and use of that database I can't change, much (thanks to some internal politics).

It's running on SQL Server 2008r2.

It's only been live for 5 days and has grown from 20 GB to upwards of 120 GB in that time. (Essentially most of the data gets deleted and then re-imported, but as I say, I can't control that side of things.)

I would love to run nightly jobs to shrink the database and reorganise the indexes, but I know that's a long way from best practices and could lead to more problems than I've already got!

QUESTIONS

  • What's the best way to handle a database that's rapidly increasing in size?
  • Should I be looking at moving the file group around to keep the physical size on disk down?
  • Is there any way to stop the server running out of space within a month?

Why would increase in innodb_buffer_pool_size slow down MySQL?

Posted: 04 Jun 2013 01:27 PM PDT

5.1.68-cll - MySQL Community Server on CentOS

The system has 32GB of RAM.

I increased innodb_buffer_pool_size from 10240M to 15360M (10GB -> 15GB).

Time taken for a series of identical operations increased from 720 to 822 seconds (14% increase).

This was the result of only a single test at each setting. But 4 previous tests performed a few months ago resulted in times between 726 and 740 s.

I just tried running it again with 8GB, and the time taken was 719s.

Why would more memory result in a slower process?

EDIT: More details on process

The process that I'm testing involves emptying some tables and rebuilding them from data in existing tables. I'm not sure if it's using INSERT ... SELECT or if it's SELECTing the data and then using PHP to build long INSERT statements. If that matters, I can find out.

There are no schema definition changes being made.

Here is the output of numactl --hardware while the server is relatively idle:

root@server [~]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 32740 MB
node 0 free: 6216 MB
node distances:
node   0
  0:  10

And free -m

root@server [~]# free -m
             total       used       free     shared    buffers     cached
Mem:         32081      25864       6216          0       2591      12791
-/+ buffers/cache:      10482      21599
Swap:        15994         16      15977

Edit by RolandoMySQLDBA

Please run this query

SELECT
    InnoDBSpace / POWER(1024,1) InnoDB_KB,
    InnoDBSpace / POWER(1024,2) InnoDB_MB,
    InnoDBSpace / POWER(1024,3) InnoDB_GB
FROM
(
    SELECT SUM(data_length+index_length) InnoDBSpace
    FROM information_schema.tables
    WHERE ENGINE='InnoDB'
) A;

RESULT:

InnoDB_KB    InnoDB_MB     InnoDB_GB
8413536      8216.34375    8.02377319335938

and this one

SHOW GLOBAL STATUS LIKE 'innodb_buffer_pool_pages%';  

RESULT:

Innodb_buffer_pool_pages_data     410035
Innodb_buffer_pool_pages_dirty    204
Innodb_buffer_pool_pages_flushed  826954
Innodb_buffer_pool_pages_free     99231
Innodb_buffer_pool_pages_misc     15022
Innodb_buffer_pool_pages_total    524288

How to import a text file with '|' delimited data to PostgreSQL database?

Posted: 04 Jun 2013 09:10 PM PDT

I have a text file with | delimited data that I want to import into a table in a PostgreSQL database. pgAdmin III only exports CSV files. I converted the file to a CSV file using MS Excel but was still unsuccessful at importing the data into the PostgreSQL database.

It says an error has occurred: extra data after last expected column. CONTEXT: COPY , line 1:

What am I doing wrong here?

Column header:

KNUMBER,APPLICANT,CONTACT,STREET1,STREET2,CITY,STATE,ZIP,DEVICENAME,DATERECEIVED,DECISIONDATE,DECISION,REVIEWADVISECOMM,PRODUCTCODE,STATEORSUMM,CLASSADVISECOMM,SSPINDICATOR,TYPE,THIRDPARTY,EXPEDITEDREVIEW,,

Sample data:

K000001,BOSTON SCIENTIFIC SCIMED/ INC.,RON BENNETT,5905 NATHAN LN.,,MINNEAPOLIS,MN,55442,WALLGRAFT TRACHEOBRONCHIAL ENDOPROSTHESIS AND UNISTEP DELIVERY SYSTEM,1/3/00,6/5/00,SE,SU,JCT,Summary,SU,,Traditional,N,N,,
K000002,USA INSTRUMENTS/ INC.,RONY THOMAS,1515 DANNER DR.,,AURORA,OH,44202,MAGNA 5000 PHASED ARRAY CTL SPINE COIL,1/3/00,2/23/00,SE,RA,MOS,Summary,RA,,Traditional,N,N,,

Table schema:

CREATE TABLE medicaldevice1 (
    medical_device_id serial NOT NULL,
    k_number character varying(8),
    applicant character varying(150) NOT NULL,
    contact character varying(50),
    street1 character varying(80),
    street2 character varying(40),
    city character varying(50),
    state character varying(8),
    zip character varying(16),
    device_name character varying(500) NOT NULL,
    date_received character varying(8),
    decision_date character varying(8),
    decision character varying(2),
    review_advise_comm character varying(2),
    product_code character varying(3),
    state_or_summary character varying(16),
    class_advise_comm character varying(2),
    ssp_indicator character varying(25),
    third_party character varying(2),
    expedited_review character varying(4),
    CONSTRAINT medical_device_id_pk PRIMARY KEY (medical_device_id)
)
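
A minimal sketch for importing the original '|'-delimited file directly with COPY, assuming the server can read the file and that the file's fields have been reconciled with the table first (the header shown above has a TYPE field and two trailing delimiters that the table does not account for); the path is a placeholder:

COPY medicaldevice1 (k_number, applicant, contact, street1, street2, city, state, zip,
                     device_name, date_received, decision_date, decision, review_advise_comm,
                     product_code, state_or_summary, class_advise_comm, ssp_indicator,
                     third_party, expedited_review)
FROM '/path/to/devices.txt'
WITH (FORMAT csv, DELIMITER '|', HEADER true);   -- drop HEADER if the file has no header line

The explicit column list keeps COPY from trying to fill the serial medical_device_id from the file; psql's \copy variant does the same from the client side without needing server file access.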

How to recover/restore corrupted Innodb data files?

Posted: 04 Jun 2013 02:38 PM PDT

A while ago, my Windows 7 system on which a MySQL Server 5.5.31 was running crashed and corrupted the InnoDB database. The weekly backup that's available does not cover all the tables that were created in the meantime, therefore I would endeavor to recover as much as possible from the data. Right after the crash, I copied the whole data folder of MySQL to an external drive. I would like use this as the starting point for my rescue attempts.

In the following I'll describe the steps of my (not yet convincing) rescue attempt so far, and would be thankful for any comments or guidance on how to improve it:

  1. I've now done a fresh install of MySQL Server 5.5.31 on another PC
  2. I stop the MySQL service with "net stop MySQL" at the command prompt.
  3. I already figured that I need to adjust the size of the innodb log file in the my.ini file as it deviates (256 MB) from the default value (19MB).
  4. In the my.ini, I also set innodb_force_recovery=6
  5. In the data folder of the fresh installation, I overwrite the ibdata1, iblogfile0, iblogfile1 files with those recovered from the crashed machine. I also copy the relevant database (UPDATE: and the mysql) folders into here (NOT the standard mysql, test and performance folders).
  6. I start the MySQL service with "net start MySQL".
  7. I go into MySQL Workbench, open my server instance, go to Data Export, basically leave the default settings, and have every table of my databases exported as an individual dump file. I also set stored procedures to be dumped. Otherwise I do not change the default settings there.
  8. I start the dump process; it makes its way through 43 out of 195 tables. Of these 43,
    • some cannot be recovered yielding an error "mysqldump: Got error: 1146: Table '...whatever...' doesn't exist when doing LOCK TABLES",
    • but many can. I assume that when the dump does not yield any error, the table's data is non-corrupted.
      Then, after the 44th, all the other table dumps fail as it is reported that the server cannot be connected to anymore:
      "mysqldump: Got error: 2003: Can't connect to MySQL server on 'localhost' (10061) when trying to connect
      Operation failed with exitcode 2
      "
      These errors then go on for all the remaining tables from the 44th to the 195th.
      For the 44th table itself, the error is the following: "mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table ...table 44... at row: 57". So it seems that for this table corruption is or begins at row 57.

Now to my questions:

  • Why is the connection breaking down given that innodb_force_recovery is set to 6?
  • How to proceed? I worked out what the 44th table was where the connection was lost and can try to resume the process from the 45th table. But isn't there a better way to do it?
  • Once the data has been copied and the server restarts well, should I just try a dump of each table or what alternatives are there?

Thanks.


UPDATE: Additional notes for my later reference
- When re-creating stored routines backed up using SHOW CREATE PROCEDURE ... and SHOW CREATE FUNCTION ..., they must be imported using DELIMITER // (create procedure code of procedure 1)// (create procedure code of procedure 2)// DELIMITER ;

Mysql reliable with 1000 new entries / minute?

Posted: 04 Jun 2013 06:47 PM PDT

I have been developing an application that in the WORST case writes 1000 entries each minute into a database for over a year...

I wanted to use MySQL as the DB, but I have read that with high data transfer rates it becomes unreliable when writing. Is this true? Is 1000 entries per minute considered a high amount of data? What would be such a high amount of data? Would corrupt data mean that I miss one entry or that I lose the whole table?

Thanks

TokuDB not much faster than MySQL

Posted: 04 Jun 2013 07:55 PM PDT

I have converted a MySQL database with 80,000,000 rows to TokuDB.

Now when I run:

 select count(id) from xxx where active=1  

it takes 90% of the time of the normal MySQL request.

What do I have to further optimize so that it runs faster?


The table definition:

CREATE TABLE `adsDelivered` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `uid` varchar(40) NOT NULL,
  `_adsDelivered` bigint(20) NOT NULL DEFAULT '0',
  `_campaign` bigint(20) NOT NULL DEFAULT '0',
  `_ad` bigint(20) NOT NULL DEFAULT '0',
  `session` varchar(44) NOT NULL,
  `referer` text NOT NULL,
  `refererDomain` varchar(256) NOT NULL,
  `pageTime` int(11) NOT NULL DEFAULT '0',
  `pageVisibleTime` int(11) NOT NULL DEFAULT '0',
  `browser` varchar(256) NOT NULL,
  `ip` varchar(15) NOT NULL,
  `clicks` int(11) NOT NULL DEFAULT '0',
  `clickTimeLast` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `tag` varchar(256) NOT NULL,
  `countryShort` varchar(2) NOT NULL,
  `timeCreated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `timeUpdated` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uid` (`uid`),
  KEY `_campaign` (`_campaign`),
  KEY `_ad` (`_ad`),
  KEY `_adsDelivered` (`_adsDelivered`),
  KEY `session` (`session`),
  KEY `tag` (`tag`),
  KEY `ip` (`ip`),
  KEY `countryShort` (`countryShort`),
  KEY `refererDomain` (`refererDomain`)
) ENGINE=TokuDB AUTO_INCREMENT=7420143 DEFAULT CHARSET=utf8;
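
A minimal sketch of one thing to check, assuming the queried table (called xxx in the question) really has an `active` column as the WHERE clause suggests: without an index on `active`, either engine has to scan every row, so the storage engine alone cannot make this count dramatically faster.

ALTER TABLE xxx ADD INDEX idx_active (`active`);

-- id is the NOT NULL primary key, so COUNT(*) is equivalent to COUNT(id)
-- and lets the optimizer pick the smallest suitable index:
SELECT COUNT(*) FROM xxx WHERE active = 1;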

How database administrators can see my requests to SQL Server?

Posted: 04 Jun 2013 06:36 PM PDT

I'm a SQL Server 2008 user. I have access to some tables. I need to request a few columns from a table, as I usually do. But I need to do it once every (for example) 5 seconds, and the system administrators shouldn't see (or feel :) my activity.

The result of the request is a table with approximately 100 rows. My query contains only a SELECT and a WHERE clause on an index. (It is light and executes very fast.)

As far as I know, SELECT operations don't write to the transaction log. I mean, if I only read the database, where does SQL Server keep a log of my SELECT actions? Can an administrator see my SELECT queries?

C2 audit, as I can see in properties, is disabled.

Are there any other ways to see my activity?

Thanks.

How do I check if a constraint exists on Firebird?

Posted: 04 Jun 2013 09:19 PM PDT

I'm about to publish a script which will update a lot of Firebird databases all at once. Some will not have this constraint, so I would like to check for the existence of a constraint before I try to drop it.

ALTER TABLE PROCESS_CATEGORY DROP CONSTRAINT INTEG_669;  
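
A minimal sketch of the existence check against the system tables (RDB$RELATION_CONSTRAINTS is standard in Firebird; names are stored upper-case and blank-padded, hence the TRIM):

SELECT COUNT(*)
FROM RDB$RELATION_CONSTRAINTS
WHERE TRIM(RDB$RELATION_NAME) = 'PROCESS_CATEGORY'
  AND TRIM(RDB$CONSTRAINT_NAME) = 'INTEG_669';

In an update script this is usually wrapped in an EXECUTE BLOCK that issues the DROP via EXECUTE STATEMENT only when the count is non-zero.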

ORA-16000 when trying to perform select on read only access ORACLE database

Posted: 04 Jun 2013 01:23 PM PDT

My application's SQL encounters ORA-16000 when trying to access read only Oracle Database

ORA-16000: database open for read-only access
ORA-06512: at "SYS.XMLTYPE", line 272
ORA-06512: at line 1
### The error may exist in com/xyz/report/dao/Transaction001Mapper.xml

This is the query that involves the XMLTYPE; the INTERFACE_CONTENT is a CLOB column:

SELECT CONTENTS
FROM ERRCODES
WHERE CODE = (
    SELECT xmltype(INTERFACE_CONTENT).extract(
               '/Message/PaymentStatus/ps:FIToFIPmtStsRpt/ps:TxInfAndSts/ps:StsRsnInf/ps:Rsn/ps:Prtry/text()',
               'xmlns="urn:abcde" xmlns:head="urn:iso:std:iso:20022:tech:xsd:head.001.001.01" xmlns:ps="urn:iso:std:iso:20022:tech:xsd:pacs.002.001.03"'
           ).getstringval() APP_CODE
    FROM MESSAGE_EXTERNAL_INTERACTION MEI
    WHERE MEI.MSG_TYPE='Pacs_002'
      AND MEI.MID='MY_ID')

I also make a lot of EXTRACTVALUE( ) calls on an XMLTYPE field.

The SQL works perfectly if the database is not read-only (i.e. read-write).

My question here is: what is the issue? Is it related to some missing privileges/grants?

How to run a cold backup with Linux/tar without shutting down MySQL slave?

Posted: 04 Jun 2013 03:23 PM PDT

I run the following before tar-ing up the data directory:

STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;

However, tar will sometimes complain that the ibdata* and ib_logfiles* files are updated during the process. What am I missing?

The slave is a cold standby machine, so there are no client processes running while tar is running.

CentOS release 5.6 64bits, MySQL 5.1.49-log source distribution.

SQL to read XML from file into PostgreSQL database

Posted: 04 Jun 2013 07:45 PM PDT

How can I write SQL to read an XML file into a PostgreSQL XML value?

PostgreSQL has a native XML data type with the XMLPARSE function to parse a text string to that type. It also has ways to read data from the filesystem; the COPY statement, among others.

But I don't see a way to write native PostgreSQL SQL statements to read the content from a filesystem entry and use that to populate an XML value. How can I do this?
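
A minimal sketch of one combination that can work, assuming superuser rights and a file in a location the server is allowed to read (pg_read_file is restricted, e.g. to the data directory); the table, column, and file names are illustrative:

INSERT INTO my_xml_table (doc)
VALUES (XMLPARSE(DOCUMENT pg_read_file('my_document.xml')));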
