Monday, April 8, 2013

[how to] Attached image on form with query



Attached image on form with query

Posted: 08 Apr 2013 08:44 PM PDT

I have a table called Products with codProduto, ProductName, Price, and Image (in this case an attachment field). I created a budget details form that uses codProduto to enter multiple products into a budget. I based the Budget Details form on a query, consDetOrçamento, because I need a field called Total that calculates Quant * Preço. I want this query to return the Image attachment as well, and it does. But when I create a new record and choose a product, I get an error message. I know it's because of the attachment field in the query. Question: how can I display the image for each product while working this way?

I already tried

Forms!Budgets!Attachment = DLookup("Image", "Products", "ProductID=" & Me.CodProduto)

but without success.

Multiple principals in a single kerberos keytab file?

Posted: 08 Apr 2013 07:55 PM PDT

I have a situation where I have multiple keytab files (different principal accounts), and my application uses these different service principal accounts to connect to one or more Oracle databases (all Kerberos-enabled). Can I maintain only one keytab in my application environment by merging them all into one? If I merge all the keytabs into one using ktutil and issue kinit (or okinit) with that keytab and a service principal, the command runs successfully and I can see the credential cache being updated. But I am not sure whether the single cache file is actually storing tickets for all the principals: when I issue klist (or oklist), I only see the ticket for the last service principal I requested.

Is it ever done to keep more than one principal in a single keytab file and maintain it in an application environment? If not, why is there an option to merge keytab files at all? Is it only meant to be used on the KDC? The reason I want to maintain one keytab is that my applications rely on the Oracle OCI thick driver (sqlnet.ora), and I can't maintain multiple keytab files and multiple sqlnet.ora files, as sqlnet.ora cannot be switched or changed at runtime.

I know I am missing something here; perhaps the flaw is in my application design for using more than one service account in the first place?

Please give me some direction; I couldn't find the right forum to get my questions answered. Thanks in advance.

-Srivatsan

Delete entry from related tables

Posted: 08 Apr 2013 06:21 PM PDT

I cannot use the InnoDB engine because of some dependency troubles with LuaSQL and PtokaX.

My current query is something like:

DELETE e.*, m.*
FROM `entries` e
INNER JOIN `magnets` m ON m.`eid` = e.`id`
WHERE e.`id` = %d

where %d is an integer passed by the user. This query works if there is at least one row in the magnets table related to the entries row.

But it fails when the entries row has no dependent rows in the magnets table. Can I modify this query to delete from both tables regardless of whether child rows exist in magnets, or do I need to check first whether a child exists and delete accordingly?
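A multi-table DELETE with a LEFT JOIN should remove the entries row whether or not any magnets rows exist, since non-matching magnets rows simply contribute nothing to the delete; a minimal sketch keeping the original table and column names:

DELETE e.*, m.*
FROM `entries` e
LEFT JOIN `magnets` m ON m.`eid` = e.`id`
WHERE e.`id` = %d;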

POSTGRES DB Server Blunder

Posted: 08 Apr 2013 08:01 PM PDT

This weekend I had to rebuild my laptop, which had Postgres running on it. For some reason I forgot to dump the data, but I did make a complete copy of the original hard drive.

Is it possible to recover data from the original data files to transfer to another instance of Postgres?

I will very much appreciate your help.

Ray

Restoring a database back to a point in time before full backup was made

Posted: 08 Apr 2013 04:00 PM PDT

A problem with one of our database tables arose at the end of last week, and I want to restore that database to another server in order to retrieve the table. We only have the most recent full backup, plus a complete set of transaction logs from before the error occurred through to after the full backup was taken.

However, when I restore, I can only use the transaction logs to roll forward, not back. Is what I'm trying to do possible?

Sequence of events:

  • Error occurred: Friday April 5 at 12:00
  • Full backup taken: Sunday April 7 at 02:00

Many thanks.
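For reference, transaction log restores can only roll a database forward from a full (or differential) backup towards a later point in time; a typical point-in-time sequence (file names and timestamps here are placeholders) looks like this:

RESTORE DATABASE MyDb FROM DISK = N'D:\MyDb_Full.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'D:\MyDb_Log.trn' WITH NORECOVERY, STOPAT = '2013-04-05T11:59:00';
RESTORE DATABASE MyDb WITH RECOVERY;

Getting back to Friday 12:00 would therefore need a full or differential backup taken before that time, with logs rolled forward from it; a later backup cannot be rolled backwards.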

I want to create a mirror in SQL Server 2008, but the database is in the status "restoring ..."

Posted: 08 Apr 2013 07:55 PM PDT

I'm trying to create a mirror in SQL Server 2008, and did the following:

On the principal server

USE master;
GO
ALTER DATABASE Test SET RECOVERY FULL;
GO;

BACKUP DATABASE Test TO DISK = N'D:\Test.Bak' WITH FORMAT
GO
RESTORE VERIFYONLY FROM DISK = N'D:\Test.Bak'
GO

BACKUP LOG Banco TO DISK = N'D:\TestLog.Bak' WITH FORMAT
GO
RESTORE VERIFYONLY FROM DISK = N'D:\TestLog.Bak'
GO

On the mirror server:

USE master;
GO;
RESTORE DATABASE Teste
FROM DISK = N'D:\Test.bak'
WITH NORECOVERY,
MOVE 'Test_Data' TO N'D:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Test_Data.mdf',
MOVE 'Test_Log' TO N'D:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Test_Data.ldf'
GO

USE master
GO
RESTORE LOG Test
FROM DISK = N'D:\TestLog.bak'
WITH NORECOVERY
GO

I cannot select from the database, and users cannot change anything; the database just stays in the "Restoring..." state.

How do I fix this?
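For what it's worth, the mirror database is supposed to sit in the RESTORING state; it never becomes directly usable while it is the mirror. Assuming mirroring endpoints already exist on both instances, the next step is to establish the partnership rather than recover the database; a sketch with placeholder host names and the default endpoint port:

-- on the mirror server
ALTER DATABASE Test SET PARTNER = 'TCP://principal.example.com:5022';
-- then on the principal server
ALTER DATABASE Test SET PARTNER = 'TCP://mirror.example.com:5022';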

Update one table from another table while sorting that table based on one column

Posted: 08 Apr 2013 02:13 PM PDT

This is the problem I'm trying to figure out in MySQL. We have an old table containing forms submitted by our users. For some reason, the previous design was that each time a user came to this survey, a new form was submitted. So in the old table we easily have several rows with the same Firstname and Lastname but different values in the other columns, plus a timestamp column, Date_Submission.

Now we are trying to move everything to a new table, but this time we only keep one row per person, and we want to keep some of the latest old data for that user (like email, phone number, etc.).

I could do the following:

update newtable, oldtable
set newtable.email = oldtable.email
where newtable.firstname = oldtable.firstname
  and newtable.lastname = oldtable.lastname;

Apparently this won't give me the "latest" old data for each person.

So I tried this one:

update newtable, oldtable
set newtable.email = oldtable.email
where newtable.firstname = oldtable.firstname
  and newtable.lastname = oldtable.lastname
order by oldtable.Date_Submission;

But then MySQL complains with:

"ERROR 1221 (HY000): Incorrect usage of UPDATE and ORDER BY".

So I'm wondering, what's the correct way to achieve this?
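One way to sidestep ORDER BY in the UPDATE is to join the new table to a derived table that picks out each person's most recent submission; a sketch using the column names above (it assumes Date_Submission is unique per person):

update newtable n
join (
    select o.firstname, o.lastname, o.email
    from oldtable o
    join (
        select firstname, lastname, max(Date_Submission) as last_sub
        from oldtable
        group by firstname, lastname
    ) latest
      on latest.firstname = o.firstname
     and latest.lastname = o.lastname
     and latest.last_sub = o.Date_Submission
) last_row
  on last_row.firstname = n.firstname
 and last_row.lastname = n.lastname
set n.email = last_row.email;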

How can I alter the fill factor of an index outside of an ALTER INDEX command?

Posted: 08 Apr 2013 02:49 PM PDT

I am using a COTS package to perform index rebuilds across my SQL Server portfolio. The package unfortunately doesn't have an option to set index fill factors globally for an individual instance or DB (I can set them when scheduling rebuilds on an index-by-index basis, but with 1,600+ indexes, I'd like to set them in advance in bulk). Outside of an ALTER INDEX command, is there a way I can redefine existing index fill factors?
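As far as I know, the fill factor stored on an existing index only changes when the index is rebuilt, but you can change the instance-wide default that applies whenever an index is created or rebuilt without an explicit FILLFACTOR (indexes that already carry their own stored value keep it). A sketch, where 90 is only an example and the option requires a service restart to take effect:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'fill factor (%)', 90;
RECONFIGURE;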

Increasing MySQL table open cache?

Posted: 08 Apr 2013 01:16 PM PDT

I often read that it is best to increase this variable slowly. Can someone explain why? My status indicates that I should increase it... What is best practice / "slowly"?

Thanks!
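For what it's worth, the usual approach is to compare Opened_tables against the current setting and raise it in modest steps, because every cached table handle consumes file descriptors and memory; a sketch (the value 2000 is only an example):

SHOW GLOBAL STATUS LIKE 'Opened_tables';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SET GLOBAL table_open_cache = 2000;  -- also set it in my.cnf so it survives a restart

If Opened_tables keeps climbing quickly after a bump, increase it again, keeping an eye on open_files_limit.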

Execution of a java program by a trigger

Posted: 08 Apr 2013 01:18 PM PDT

Can an update trigger execute a Java program?

I have a table T1 with a column named Flag. Whenever Flag changes, I want a trigger to run that results in the execution of a Java program.
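MySQL triggers cannot launch external programs out of the box. One common workaround, sketched below, assumes the third-party lib_mysqludf_sys UDF plugin is installed so that sys_exec() is available; the jar path is purely hypothetical. A safer alternative is to have the trigger insert a row into a queue table that a standalone Java process polls.

DELIMITER $$
CREATE TRIGGER t1_flag_changed
AFTER UPDATE ON T1
FOR EACH ROW
BEGIN
    IF NEW.Flag <> OLD.Flag THEN
        -- sys_exec() comes from lib_mysqludf_sys and blocks the transaction while the command runs
        SET @rc = sys_exec('java -jar /opt/app/notify.jar');
    END IF;
END$$
DELIMITER ;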

If MySQL isn't caching, then what is?

Posted: 08 Apr 2013 12:49 PM PDT

I ran a few commands to see whether MySQL is caching queries/result sets. It looks like this is not the case.

mysql> show variables like 'query_cache_size';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| query_cache_size | 0     |
+------------------+-------+
1 row in set (0.01 sec)

mysql> show status like 'Qcache%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Qcache_free_blocks      | 0     |
| Qcache_free_memory      | 0     |
| Qcache_hits             | 0     |
| Qcache_inserts          | 0     |
| Qcache_lowmem_prunes    | 0     |
| Qcache_not_cached       | 0     |
| Qcache_queries_in_cache | 0     |
| Qcache_total_blocks     | 0     |
+-------------------------+-------+
8 rows in set (0.00 sec)

At the same time, I can run the same query at one moment and it takes 42 seconds, and at another moment it takes 1.10 seconds.

If the MySQL server is not doing the caching, what is? Is it the mysql client? Is it my operating system (OS X)? Is it some service on the server that's not MySQL? Are all of these possibilities, or only one?
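Even with the query cache off, InnoDB keeps recently used data and index pages in its buffer pool, and the operating system caches the underlying files, so a second run of the same query can be dramatically faster without any result caching. You can get a feel for this from the server itself, for example:

SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

A high ratio of Innodb_buffer_pool_read_requests to Innodb_buffer_pool_reads means most pages are being served from memory rather than disk.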

Should I disable triggers, timestamps in a slave in mysql?

Posted: 08 Apr 2013 08:41 PM PDT

We're using MySQL. We have a master that will eventually have two slaves. There are triggers in the database that execute when data changes in certain tables. I am wondering whether to disable the triggers on the slaves; it seems to me that I should. It also seems like things like timestamps should be disabled on the slaves, because otherwise the data will end up different on the slaves than on the master.

I'm not a DBA, I'm a developer. We don't have a DBA at my company, so the ops admin and I are figuring it out as we go along. We already have the master and one slave set up and replicating, and we are getting a duplicate-entry error on the slave.

We were thinking of not halting replication for duplicate replication errors, as in this post: http://www.ducea.com/2008/02/13/mysql-skip-duplicate-replication-errors/. We don't know whether this is a good idea. I do feel that this masks the problem, but neither of us knows how to fix the problem.

If we should disable the triggers and/or set the slave not to insert timestamps, how do we go about doing that? Or does establishing a db as a slave automagically do these things anyway?
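For what it's worth, whether slave-side triggers fire depends on the binary log format: with statement-based replication the statements are re-executed on the slave, so its triggers fire and should normally be left enabled so the data stays consistent; with row-based replication the row changes produced by the master's triggers are shipped directly and the slave's triggers do not fire. TIMESTAMP and NOW() values are replicated rather than re-evaluated, so they should not drift either way. You can check the format with:

SHOW GLOBAL VARIABLES LIKE 'binlog_format';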

MySQL PDO Cannot assign requested address

Posted: 08 Apr 2013 12:40 PM PDT

Can someone help me with this error?

[08-Apr-2013 17:44:08 Europe/Berlin] PHP Warning:  PDO::__construct(): [2002] Cannot assign requested address (trying to connect via tcp://****:3306) in /var/www/***
[08-Apr-2013 17:44:08 Europe/Berlin] PHP Fatal error:  Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [2002] Cannot assign requested address' in /var/www/***

I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error.

I've tried this recommendation from Stack Overflow, but it does not solve my problem.

Odd syntax error when creating view with SCHEMABINDING [closed]

Posted: 08 Apr 2013 08:52 AM PDT

I've been working with SQL Views for a while now, and I recently read that something that I want to do requires that one of my views be created with schemabinding. That sounds fine, but I get a syntax error when I try to follow along with an example:

CREATE VIEW dbo.MyView WTIH SCHEMABINDING AS
SELECT t1.a, t2.b
    FROM dbo.Table1 t1
    INNER JOIN dbo.Table2 t2 ON t1.x = t2.x;

I get the following error in SQL Server Management Studio 2012:

Msg 102, Level 15, State 1, Procedure DHRA_Appointment, Line 2
Incorrect syntax near 'WTIH'.

This looks like some simple error in my SQL. What am I doing wrong?
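The message points straight at the problem: WITH is misspelled as WTIH. With the keyword corrected (and keeping the two-part object names that SCHEMABINDING requires), the view should compile:

CREATE VIEW dbo.MyView WITH SCHEMABINDING AS
SELECT t1.a, t2.b
    FROM dbo.Table1 t1
    INNER JOIN dbo.Table2 t2 ON t1.x = t2.x;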

SQL Server BULK INSERT

Posted: 08 Apr 2013 02:27 PM PDT

I have a file saved using BCP, and I want to know the specification of the table it needs to be materialized into in the database. Is that possible?

Thanks

Incremental backup in postgresql

Posted: 08 Apr 2013 04:59 PM PDT

I want to migrate a production database from one Linux server to another. I am not concerned about upgrading my database version; I just want to move the database from one server to the other with minimum downtime.

The database version is 8.3 and its size is 160 GB.

It takes 12 hours to take the dump and 8 hours to restore it.

That adds up to 20 hours of downtime, which I cannot afford. I can afford a downtime of 4 hours at most.

So how can I apply the delta of changes that happen during those 20 hours to the copy of the database on the other server?

Please help.

Delete the N oldest entries grouped by multiple columns

Posted: 08 Apr 2013 12:29 PM PDT

I'd like to delete the oldest entries from my database, but at least N words of each char should remain in each category.

Structure of TABLE words

id | word | char | category  

For example I have the following entries:

id | word | char | category
 1 | abc  | a    | 1
 2 | abc  | a    | 1
 3 | abc  | a    | 1
 4 | bcs  | b    | 1
 5 | bcs  | b    | 1
 6 | bcs  | b    | 1
 7 | csd  | c    | 2
 8 | csd  | c    | 2
 9 | asd  | a    | 2
10 | asc  | a    | 2

For N=2 the following entries should be removed, as their ids are the lowest:

1 | abc | a | 1
4 | bcs | b | 1

Can somebody help? I tried various things (GROUP BY, HAVING, inner and outer joins), but nothing worked.
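Without window functions, one approach is to count, for each row, how many newer rows exist in the same (char, category) group and delete the rows that have at least N newer ones; a sketch for N = 2 using the table above:

DELETE w
FROM words w
JOIN (
    SELECT w1.id
    FROM words w1
    JOIN words w2
      ON w2.`char` = w1.`char`
     AND w2.category = w1.category
     AND w2.id > w1.id
    GROUP BY w1.id
    HAVING COUNT(*) >= 2
) old_rows ON old_rows.id = w.id;

The derived table is materialized first, which is what lets MySQL delete from words while referencing it.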

select count(*) in MySQL 5.5 InnoDB -- rewrite advice?

Posted: 08 Apr 2013 09:43 AM PDT

I need advice on how to rewrite a select count(*) query for InnoDB tables on MySQL 5.5; in the new environment it is very slow...

select count(*)
from mails3
join questions using (question_id)
where mails3.from_user_id = '86696'
  and mails3.is_community_star = 1
  and mails3.rating = 3
  and questions.is_rated_community = 1;

The query execution plan looks simple enough, but the query is very slow:

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: mails3
         type: ref
possible_keys: question_id_index,emails_doublestars,favorite_emails,user_responses,owner_responses,sentmail,watchlist,read_responses,rating_date,read_and_unrated
          key: emails_doublestars
      key_len: 8
          ref: const,const,const
         rows: 2784
        Extra: Using where
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: questions
         type: eq_ref
possible_keys: PRIMARY,is_rated_community_senddate_idx
          key: PRIMARY
      key_len: 5
          ref: answerology.mails3.QUESTION_ID
         rows: 1
        Extra: Using where
2 rows in set (0.00 sec)
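One thing worth trying (a sketch; the exact column order is an assumption based on the WHERE clause) is a composite index on mails3 that covers the filtered columns plus the join column, so the few thousand candidate rows can be counted from the index alone without touching table rows:

ALTER TABLE mails3
    ADD INDEX from_user_star_rating_q (from_user_id, is_community_star, rating, question_id);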

Trying to create a Data Model

Posted: 08 Apr 2013 10:47 AM PDT

It may sound stupid but it's my first experience ever with databases.

It's a simple database for a car accessories store. I was hoping that making a sale of a certain item would affect the quantity on hand of that item; I'm not sure whether this is possible with this design. Also, as shown in the image, an item can fit one car or more, so I thought it should be a many-to-many relationship.

Diagram
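A minimal sketch of how this is commonly modelled (table and column names are illustrative, not taken from the diagram): a junction table resolves the many-to-many relationship between items and cars, and recording a sale line is paired with an UPDATE that decrements the item's stock:

CREATE TABLE item (
    item_id  INT PRIMARY KEY,
    name     VARCHAR(100),
    quantity INT NOT NULL
);

CREATE TABLE car (
    car_id INT PRIMARY KEY,
    model  VARCHAR(100)
);

CREATE TABLE item_car (
    item_id INT,
    car_id  INT,
    PRIMARY KEY (item_id, car_id)
);

-- when 2 units of item 5 are sold:
UPDATE item SET quantity = quantity - 2 WHERE item_id = 5;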

Get Hierarchial Data in Self-Referencing Table (Parents , Childs) [closed]

Posted: 08 Apr 2013 01:47 PM PDT

In the following Fiddle,

sqlfiddle

I will pass an sno, and I need all the parents of that sno in one table and all the children of that sno in another table.

Please refer to this question. Since both are related, they use the same table.

I need to display the top 10 levels in a table.

Adding slave to existing master-master config in mysql throws Foreign key constraint failed error

Posted: 08 Apr 2013 09:08 AM PDT

We have two MySQL servers running in a master-master configuration.

Now we have to add a slave to the existing configuration.

But upon adding the third DB server and starting the slave on it, it throws a "Cannot add or update a child row: a foreign key constraint fails" error.

We have tried taking a mysqldump from the master with --master-data and restoring it; after that we started replication on the slave. Even then, we get the same error. We have also tried XtraBackup, but that also throws the same error. Are we missing something?

EDIT 1

mysqldump command: mysqldump --master-data -uroot -p dbname > dbname.sql
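For comparison, a dump meant to seed a slave is usually taken as one consistent snapshot, for example:

mysqldump --single-transaction --master-data=2 --routines --triggers -uroot -p dbname > dbname.sql

Note that --single-transaction only gives a consistent view of InnoDB tables; any MyISAM tables still need writes to be stopped (or --lock-all-tables) while the dump runs, otherwise the slave can start from an inconsistent position and hit exactly this kind of foreign key error.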

We have also tried it this way: percona xtrabackup

In both cases, upon getting the foreign key error, we tried to dump and restore individual tables referenced by the foreign keys manually from the master to the slave. Upon doing this, the replication starts and seems to work normally with 0 seconds behind master for a few minutes, after which another foreign key error shows up, stopping the replication.

EDIT 2

auto_increment_increment is set to 2 on both masters. auto_increment_offset is set to 1 on Master 1 and 2 on Master 2. We have mostly InnoDB tables and some MyISAM tables.

MySQL version 5.5.30.

EDIT 3

There is a follow-up question.

Transform XPath map into XML document using relational data

Posted: 08 Apr 2013 09:57 AM PDT

Background

Most modern databases have XML functions that can be used to extract data in an XML format. I want to avoid the task of manually calling XML functions to extract the data.

This problem involves devising a generic solution to create XML documents based on mapping database tables (and JOIN conditions) to XPath expressions.

Problem

An XPath map codifies associations between an XML document and relational data as follows:

root               > people              -- "root" is a special keyword
person             > person              -- > is a node to table relation
person.first_name -> name/first          -- -> is a node/attribute to column relation
person.last_name  -> name/last
person.age        -> [@age]
account.person_id => person.person_id    -- => is a join
account            > person/account
account.number    -> [@id]

Where a PERSON table might resemble:

person_id | first_name | last_name | age
      123 | Peter      | Parker    | 18
      456 | James      | Jameson   | 42

Where an ACCOUNT table might resemble:

account_id | person_id | number
         1 |       123 | 123456789

Calling a function using the XPath map would produce the following XML document:

<people>
  <person age="18">
    <name>
      <first>Peter</first>
      <last>Parker</last>
    </name>
    <account id="123456789" />
  </person>
  <person age="42">
    <name>
      <first>James</first>
      <last>Jameson</last>
    </name>
  </person>
</people>

In this case, James Jameson does not have an account and so the corresponding XML element (account) is not included in the final document.

This is a difficult problem and a complete solution is not necessary. A solution that handles 80% of simple tables mapped to simple XML elements and attributes would suffice.
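For illustration, the document that a generic solution would have to produce for this particular map, written by hand in SQL Server's FOR XML PATH dialect (other databases have analogous XML functions), might look roughly like:

SELECT p.age        AS [@age],
       p.first_name AS [name/first],
       p.last_name  AS [name/last],
       (SELECT a.number AS [@id]
        FROM account a
        WHERE a.person_id = p.person_id
        FOR XML PATH('account'), TYPE)
FROM person p
FOR XML PATH('person'), ROOT('people');

The goal of the generic solution is to generate this kind of statement automatically from the XPath map.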

Question

What technologies, or open source implementations, already perform such a task?

Barring that, what algorithm would return an XML document based on such a generic XPath map? The algorithm must transform the structure defined by the XPath map into an XML document with the content from the relations defined in the XPath map.


Concatenation Physical Operation: Does it guarantee order of execution?

Posted: 08 Apr 2013 09:55 AM PDT

In standard SQL, the result of a union all is not guaranteed to be in any order. So, something like:

select 'A' as c union all select 'B'  

Could return two rows in any order (although, in practice on any database I know of, 'A' will come before 'B').

In SQL Server, this turns into an execution plan using a "concatenation" physical operation.

I could easily imagine that the concatenation operation would scan its inputs, returning whatever input has records available. However, I found the following statement on the web (here):

The Query Processor will execute this plan in the order that the operators appear in the plan, the first is the top one and the last is the end one.

Question: Is this true in practice? Is this guaranteed to be true?

I haven't found any reference in Microsoft documentation that the inputs are scanned in order, from the first to the last. On the other hand, whenever I try running it, the results suggest that the inputs are, indeed, processed in order.

Is there a way to have the engine process more than one input at a time? My tests (using much more complicated expressions than constants) are on a parallel-enabled 8-core machine, and most queries do take advantage of the parallelism.

Search every column in every table in Sybase Database

Posted: 08 Apr 2013 08:01 PM PDT

I've been tasked with creating an application that pulls data from our Micros point-of-sale system. The POS uses a Sybase database running on one of our servers. The database schema is very convoluted. I've been able to figure out most of the schema and gain access to the data I need; however, there are a few things still left to find. I know what they are called in the actual POS, but I can't seem to find them anywhere in the database (although I haven't actually looked through all 200+ tables).

I'm wondering if there's any way to run a query to search for a specific string in all the columns in all the tables. BTW, I'm using the "Interactive SQL" application that comes with the Sybase Database Client software to connect to the database.
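One low-tech approach is to generate one probe query per column from the system catalog and then run the generated script; a sketch assuming this is SQL Anywhere (which Micros typically ships) and a hypothetical search string, using the SYS.SYSTABLE and SYS.SYSCOLUMN views:

SELECT 'SELECT ''' || t.table_name || '.' || c.column_name || ''' AS hit, COUNT(*) AS matches '
    || 'FROM "' || t.table_name || '" '
    || 'WHERE "' || c.column_name || '" LIKE ''%SearchString%'';'
FROM SYS.SYSTABLE t
JOIN SYS.SYSCOLUMN c ON c.table_id = t.table_id
WHERE t.table_type = 'BASE';

Running the output and ignoring errors from non-character columns gives a crude but effective full-database search.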

column precision exceeded. ORA-01438 is so unmeaningful

Posted: 08 Apr 2013 07:01 PM PDT

I am not sure whether this is a question for the Stack Exchange Database Administrators group; please let me know if it's not.

Let's suppose you have a batched Java prepared statement writing data to the database. If one of the columns has a value larger than its specified precision, an ORA-01438 exception is thrown. Is there any way to find out which column it is?

Applying user-defined fields to arbitrary entities in a SQL Server database

Posted: 08 Apr 2013 09:01 PM PDT

Currently we have an old (rather crude) system that has user defined fields, which are mapped against rows in arbitrary tables. This was an after-the-fact modification based on a customer request, and it wasn't really designed to scale well. Our system has around 60 different types of entities, which makes things even more complicated. Essentially the implementation looks like this:

USER_DEFINED_FIELDS:

UDF_ID         int
ENTITY_TYPE    int
UDF_NAME       nvarchar(64)
UDF_DATA_TYPE  int
UDF_FORMAT     nvarchar(16)
UDF_MASK       nvarchar(24)
UDF_FLAGS      int

UDF_VALUES_NUMBER:

UDF_ID         int
ENTITY_ID      int
VALUE          int
MODIFIED       datetime

UDF_VALUES_TEXT:

UDF_ID         int
ENTITY_ID      int
VALUE          nvarchar(255)
MODIFIED       datetime

etc...

This gets nice and fun when we generate our own ways to index compound primary keys, but that's another DailyWTF-worthy story.

Obviously this is pretty hideous, and leads to some spectacularly horrific queries being generated, but it's worked alright for now because we limit each entity to a maximum of 5 user defined fields. As a quick disclaimer, I wasn't with the company when this design decision was made! ;)

Anyway, we're about to start a shiny new project and will inevitably need a better way of doing this, with no restrictions on the number of UDFs we can apply to entities, increased performance, and less horror in the generated query department. If we could run a NoSQL solution like Mongo or Redis I'd be happy and wouldn't need to bother you all, but sadly that's not an option. Instead, I need a way to do this from within SQL Server 2008 R2.

So far, I've come up with the following options:

  • Individual UDF table for each entity type, with identical structures.
    • Benefits: Queries are faster and simpler, solves the compound index problem, can search by UDF.
    • Downsides: Table versioning is more cumbersome, lots of extra tables, difficult to index across similar entity types (e.g. purchase orders and sales orders)
  • Binary blob UDF column in every entity's table.
    • Benefits: Really fast: no fetch overhead, no extra JOINs.
    • Downsides: Can't search by UDF, extra proprietary code required.
  • XML column in every entity's table.
    • Benefits: Fast like binary blobs, seems to be indexable.
    • Downsides: No idea how they work - looks complicated!

So, do any of these ideas have merit? Is there a better alternative?
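On the XML column option, a minimal sketch of how it is typically used (table, column, and field names here are hypothetical; a primary XML index also requires the table to have a clustered primary key): the UDF values live in one xml column, and exist()/value() queries keep them searchable:

ALTER TABLE dbo.SalesOrder ADD Udf xml NULL;

CREATE PRIMARY XML INDEX IX_SalesOrder_Udf ON dbo.SalesOrder (Udf);

-- stored as: <udf><field name="Color">Red</field><field name="Rush">1</field></udf>
SELECT SalesOrderId
FROM dbo.SalesOrder
WHERE Udf.exist('/udf/field[@name="Color" and text()="Red"]') = 1;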

How to install SQL Server 2008 R2 Profiler

Posted: 08 Apr 2013 02:31 PM PDT

SQL Server Management Studio doesn't have it on the Tools menu in my case, even though some blogs and forums report that it should be there. Apparently you either somehow have it or you don't, but how you got it, if you did, seems to be the luck of the draw. I am an MSDN subscriber.

Is it possible to install the Profiler using some well-defined steps? A "download X, open menu item Y, and click button Z" sort of prescription?

[Edit: additional info] Upgrading from SQL Server 2005 Enterprise Edition to SQL Server 2008 R2 Enterprise Edition didn't help (previously, of the 2008 R2 editions, I had only Express Edition). I am trying the Setup program's various options, but I am unable to upgrade from Basic Tools (originally installed with 2008 R2 Express Edition) to Complete Tools, which I hope would contain the Profiler. The Complete Tools option cannot be checked in any variation of the Setup options that I have tried.

Simplified/automated datafile free disk space reclaim

Posted: 08 Apr 2013 09:10 AM PDT

On Oracle version 11g:

After Googling, I can't find a simple way to reclaim free space after deleting a table.

I've found a lot of explanations describing how the datafile becomes fragmented and the big stack of tedious queries you have to run in order to move the "empty space" to the end of the datafile (table by table... even when you have 200 tables!?).

Then you have to reduce the datafile size by "guessing" how much you can shrink it, or you must know exactly what your "block size" is... And finally, you should not forget to "rebuild the indexes".

See, for example: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:54178027703899

and http://www.oracle-base.com/articles/misc/ReclaimingUnusedSpace.php

Is there a simple PL/SQL procedure that, given a tablespace name or datafile name, would do that job? Or any similar Oracle tool?
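Not a full answer, but the per-table legwork can at least be scripted with 11g's segment shrink; a sketch that assumes ASSM locally managed tablespaces, silently skips tables that cannot be shrunk (LONG columns, function-based indexes, and so on), and still leaves the final ALTER DATABASE DATAFILE ... RESIZE to be done separately:

BEGIN
  FOR t IN (SELECT table_name FROM user_tables) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ALTER TABLE "' || t.table_name || '" ENABLE ROW MOVEMENT';
      EXECUTE IMMEDIATE 'ALTER TABLE "' || t.table_name || '" SHRINK SPACE CASCADE';
    EXCEPTION
      WHEN OTHERS THEN NULL;  -- skip tables that do not support shrink
    END;
  END LOOP;
END;
/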

How do I create an index to speed up an aggregate LIKE query on an expression in postgres 8.4?

Posted: 08 Apr 2013 11:47 AM PDT

I may be asking the wrong question in the title. Here are the facts:

My customer service folk have been complaining about slow response times when doing customer lookups on the administration interface of our Django-based site.

We're using Postgres 8.4.6. I started logging slow queries, and discovered this culprit:

SELECT COUNT(*) FROM "auth_user" WHERE UPPER("auth_user"."email"::text) LIKE UPPER(E'%deyk%')  

This query is taking upwards of 32 seconds to run. Here's the query plan provided by EXPLAIN:

QUERY PLAN
Aggregate  (cost=205171.71..205171.72 rows=1 width=0)
  ->  Seq Scan on auth_user  (cost=0.00..205166.46 rows=2096 width=0)
        Filter: (upper((email)::text) ~~ '%DEYK%'::text)

Because this is a query generated by the Django ORM from a Django QuerySet generated by the Django Admin application, I don't have any control over the query itself. An index seems like the logical solution. I tried creating an index to speed this up, but it hasn't made a difference:

CREATE INDEX auth_user_email_upper ON auth_user USING btree (upper(email::text))  

What am I doing wrong? How can I speed up this query?
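A b-tree index on upper(email) cannot help here, because the pattern starts with a wildcard and every row still has to be scanned. On PostgreSQL 9.1 and later the usual fix is a trigram index from the pg_trgm extension, which can serve '%...%' LIKE searches; on 8.4 the pg_trgm contrib module exists but does not yet accelerate LIKE, so the sketch below is really an argument for upgrading:

CREATE EXTENSION pg_trgm;

CREATE INDEX auth_user_email_upper_trgm
    ON auth_user
 USING gin (upper(email) gin_trgm_ops);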
