Wednesday, September 18, 2013


How to UPDATE a table for the level of parents in a column?

Posted: 18 Sep 2013 09:06 PM PDT

I have a table with internal parents as

id    parent    parent_level
1     NULL      1
2     1         2
3     1         2
4     3         3

where parent has a FK to id.

How can I update the table to assign the values of parent_level according to each row's parent, and its parent's parent, and so on?
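If the DBMS supports recursive common table expressions (SQL Server 2005+, MySQL 8+, PostgreSQL, SQLite), the levels can be computed by walking down from the roots and writing the computed depth back. A sketch using SQLite via Python's sqlite3, with the question's data (the table name `t` is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (
    id INTEGER PRIMARY KEY,
    parent INTEGER REFERENCES t(id),
    parent_level INTEGER
);
INSERT INTO t (id, parent) VALUES (1, NULL), (2, 1), (3, 1), (4, 3);
""")

# Walk the parent chain from the roots, adding 1 per level,
# then store the computed depth in parent_level.
conn.execute("""
WITH RECURSIVE levels(id, lvl) AS (
    SELECT id, 1 FROM t WHERE parent IS NULL
    UNION ALL
    SELECT t.id, levels.lvl + 1
    FROM t JOIN levels ON t.parent = levels.id
)
UPDATE t SET parent_level = (SELECT lvl FROM levels WHERE levels.id = t.id)
""")

print(conn.execute("SELECT id, parent, parent_level FROM t ORDER BY id").fetchall())
```

The exact UPDATE-with-CTE syntax differs per engine (e.g. SQL Server uses `WITH ... UPDATE` on a joined CTE), but the recursive walk is the same idea.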

Where can I see the log in MSSQL 2008 R2 [duplicate]

Posted: 18 Sep 2013 07:33 PM PDT

This question already has an answer here:

I want to know whether it is possible that SQL Server 2008 R2 keeps a log: when someone runs an ALTER TABLE or modifies a stored procedure, does SQL Server record the action and who performed it? If so, how can I see that log? Sorry for my English.

What am I doing wrong in my MySQL database?

Posted: 18 Sep 2013 08:56 PM PDT

I'm creating a trigger that deletes rows from a table when they are inserted. However, when the trigger executes I get the following message:

MYSQL ERROR: Can't update table 'player_viplist' in stored function/trigger because it is already used by statement which invoked this stored function/trigger.

What do I need to do for this trigger to work?

DROP TRIGGER dele;

DELIMITER |

CREATE DEFINER = 'root'@'localhost' TRIGGER dele
AFTER INSERT ON player_viplist
FOR EACH ROW
BEGIN
  DELETE FROM player_viplist;
END|

DELIMITER ;

Several stays: many-to-many relationship

Posted: 18 Sep 2013 07:36 PM PDT

Could you please help me with the following problem? I'm a total novice in DB design.

I have a table with researchers who visit our university. They come for some period of time (from, till). Now, how can I record in the database that somebody came several times? I already have an entry for a particular person and need to add further stays and be able to see them in a form.

My idea was: this must be a many-to-many relationship. OK, then I need a third table, Researcher/Stay. And then I don't know what to do next. In the Stay table, "from+till" is the key, and I haven't seen examples with a composite key in a many-to-many relationship, so I don't know how to proceed.
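For this case a full many-to-many junction table may be unnecessary: each stay belongs to exactly one researcher, so a one-to-many link with a surrogate key on Stay avoids the composite (from, till) key entirely. A sketch in SQLite via Python's sqlite3 (table and column names are illustrative; `from`/`till` are renamed `date_from`/`date_till` since FROM is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE researcher (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
-- Each stay belongs to exactly one researcher, so a plain FK is enough;
-- the surrogate id avoids using (from, till) as a composite key.
CREATE TABLE stay (
    id            INTEGER PRIMARY KEY,
    researcher_id INTEGER NOT NULL REFERENCES researcher(id),
    date_from     TEXT NOT NULL,
    date_till     TEXT NOT NULL
);
""")
conn.execute("INSERT INTO researcher (id, name) VALUES (1, 'Dr. A')")
conn.executemany(
    "INSERT INTO stay (researcher_id, date_from, date_till) VALUES (?, ?, ?)",
    [(1, '2013-01-10', '2013-02-01'), (1, '2013-09-01', '2013-09-30')],
)
# All stays for one researcher, in chronological order:
stays = conn.execute(
    "SELECT date_from, date_till FROM stay WHERE researcher_id = 1 ORDER BY date_from"
).fetchall()
print(stays)
```

A genuine many-to-many (with a junction table) would only be needed if one stay could be shared by several researchers.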


FOR loop Oracle query with calculation

Posted: 18 Sep 2013 03:36 PM PDT

I have a list of distinct areas (dispatch groups), and I would like to track statistics for those areas in a SQL statement. I currently have a setup that produces some summary information, but it doesn't include the whole list.

What I'd like to do is Select a distinct array of values from 1 table, then run SQL for each of those values against a few other tables and populate a third table. I've done a fair amount of Googling, and see that I can loop using "LOOP" or feed in the distinct values via a CURSOR.

Here's the SQL I'm using now; but if a dispatch group isn't in the current outages table, no statistics are shown for that dispatch group.

select a.DISPATCHGROUP, a.WEATHER, SUM(a.NUM_CUST) as out_cust, count(a.eventnum) as events,
  (select count(*) from v_outcall_rpt b, v_current_outages_rpt c
   where b.isopen = 'T' and b.dgroup = a.dispatchgroup) as calls,
  (select count(*) from v_current_outages_rpt c
   where c.dispatchgroup = a.dispatchgroup and c.event_type_code = 'TOEL') as ISOLATED,
  (select count(*) from v_current_outages_rpt c
   where c.dispatchgroup = a.dispatchgroup
     and (c.event_type_code = 'TOTO' or c.event_type_code = 'TOTU'
          or c.event_type_code = 'TOVT' or c.event_type_code = 'TOPM')) as TSTATION,
  (select count(*) from v_current_outages_rpt c
   where c.dispatchgroup = a.dispatchgroup
     and (c.event_type_code = 'TOFS' or c.event_type_code = 'TOSC'
          or c.event_type_code = 'TORL' or c.event_type_code = 'TOLC'
          or c.event_type_code = 'TOSW')) as lateral,
  (select count(*) from v_current_outages_rpt c
   where c.dispatchgroup = a.dispatchgroup
     and (c.event_type_code = 'TOCR' or c.event_type_code = 'TOSB'
          or c.event_type_code = 'TOTL')) as mainline,
  (select count(*) from v_current_outages_rpt c
   where c.dispatchgroup = a.dispatchgroup and c.event_type_code = 'TOEE') as emergency,
  (select count(*) from v_current_outages_rpt c
   where c.dispatchgroup = a.dispatchgroup and c.event_type_code = 'TOEC') as cleanup,
  (select count(*) from v_outage_duration_rpt
   where substr(restore_dts,1,8) = (select SUBSTR(max(view_dts),1,8) from v_current_outages_rpt)
     and a.dispatchgroup = dgroup) as restored,
  (select count(*) from v_outcall_rpt
   where substr(offtime,1,8) = (select SUBSTR(max(view_dts),1,8) from v_current_outages_rpt)
     and a.dispatchgroup = dgroup) as callstoday
from v_current_outages_rpt a
where weather is not null
group by a.DISPATCHGROUP, WEATHER;

When I try adding select distinct dgroup from MV_WE_AGENCY_LOOKUP as the dispatch group source, it errors out with "single-row subquery returns more than one row".
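The usual cure for both symptoms (missing groups, and the subquery error) is to drive the query from the full group list with an outer join rather than looping over a cursor: groups with no current outages then still appear, with zero counts. A minimal sketch of the pattern using SQLite via Python's sqlite3, with heavily simplified stand-ins for the question's tables (column lists and sample data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mv_we_agency_lookup (dgroup TEXT);
CREATE TABLE v_current_outages_rpt (dispatchgroup TEXT, num_cust INTEGER);
INSERT INTO mv_we_agency_lookup VALUES ('NORTH'), ('SOUTH'), ('EAST');
INSERT INTO v_current_outages_rpt VALUES ('NORTH', 120), ('NORTH', 30);
""")

# Drive from the full group list; groups with no outages get zero counts
# because COUNT ignores the NULLs produced by the LEFT JOIN.
rows = conn.execute("""
SELECT g.dgroup,
       COUNT(o.dispatchgroup)       AS events,
       COALESCE(SUM(o.num_cust), 0) AS out_cust
FROM mv_we_agency_lookup g
LEFT JOIN v_current_outages_rpt o ON o.dispatchgroup = g.dgroup
GROUP BY g.dgroup
ORDER BY g.dgroup
""").fetchall()
print(rows)
```

In Oracle the same shape works with `LEFT JOIN` (or the legacy `(+)` outer-join syntax), with each of the correlated count subqueries kept as-is against the driving group list.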

Concatenate one-to-many field in single result?

Posted: 18 Sep 2013 02:06 PM PDT

Say I have the following query:

SELECT *
FROM AppDetails, AppTags
WHERE AppDetails.AppID = '1'
  AND AppDetails.AppID = AppTags.AppID

Which gives the following results:

AppID    AppName        AppType    Tag
1        Application1   Utility    Test1
1        Application1   Utility    Test2
1        Application1   Utility    Test3

How can I modify the query to return something like this:

AppID    AppName        AppType    Tags
1        Application1   Utility    Test1,Test2,Test3
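In MySQL this is typically done with GROUP_CONCAT (SQL Server would use FOR XML PATH or, in later versions, STRING_AGG). A runnable sketch using SQLite via Python's sqlite3, whose group_concat behaves like MySQL's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE AppDetails (AppID INTEGER, AppName TEXT, AppType TEXT);
CREATE TABLE AppTags (AppID INTEGER, Tag TEXT);
INSERT INTO AppDetails VALUES (1, 'Application1', 'Utility');
INSERT INTO AppTags VALUES (1, 'Test1'), (1, 'Test2'), (1, 'Test3');
""")

# Collapse the one-to-many tags into a single comma-separated column.
row = conn.execute("""
SELECT d.AppID, d.AppName, d.AppType,
       GROUP_CONCAT(t.Tag, ',') AS Tags
FROM AppDetails d
JOIN AppTags t ON t.AppID = d.AppID
WHERE d.AppID = 1
GROUP BY d.AppID, d.AppName, d.AppType
""").fetchone()
print(row)
```

Note that without an explicit ordering clause the concatenation order is not guaranteed; MySQL supports `GROUP_CONCAT(Tag ORDER BY Tag)` if the order matters.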

How to increase MySQL performance with joins

Posted: 18 Sep 2013 03:13 PM PDT

I have a MySQL database with the InnoDB engine. In particular, one of my queries became very slow. The query is shown below:

SELECT t1.`ARTREF1`
FROM `lijnen` AS t3
INNER JOIN (`kortingen` AS t2, `raiproduct` AS t1)
  ON (t1.LEV = t3.LEV
      AND t2.LEV = t3.LEV
      AND t1.KCP1 = t3.SPLC
      AND t2.SPLC = t3.SPLC
      AND t1.LEV = t2.LEV
      AND t2.SPLC = t1.KCP1
      AND t1.`ARTREF1` = '".$xartc."'
      AND t2.ALDOC = '".$supcodec."'
      AND t3.BVNR = '".$bvnr."')

How it works: I receive an XML document through a SOAP web service. From the XML document I get the article number and supplier number, and I query my database to see if the article exists.
It takes nearly 5 to 8 seconds to show the results on screen.

I use indexes on the columns I search, and every day I purge empty fields to optimize my database. How can I increase the speed of the query?

Some database and hardware information:

  • raiproduct: 1,500,000 rows
  • lijnen: 4100 rows
  • kortingen: 1200 rows
  • CPU: 4 cores
  • Processors: Intel 64-bit hexa-core (6 cores), 2.5 GHz
  • RAM: 4096 MB
  • SWAP RAM: 1024 MB

I hope someone can help me with this problem.
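Since the large raiproduct table is filtered on ARTREF1 and joined on LEV and KCP1, a composite index covering those columns lets the engine seek instead of scanning 1.5 million rows. A minimal sketch using SQLite via Python's sqlite3 (the index name and simplified columns are illustrative; in MySQL you would create the same index and verify it with EXPLAIN):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raiproduct (LEV TEXT, KCP1 TEXT, ARTREF1 TEXT);
-- Equality filter column first, then the join columns.
CREATE INDEX idx_raiproduct ON raiproduct (ARTREF1, LEV, KCP1);
""")

# Ask the planner how it would run the filtered lookup.
plan = conn.execute("""
EXPLAIN QUERY PLAN
SELECT * FROM raiproduct WHERE ARTREF1 = 'A1' AND LEV = 'L1'
""").fetchall()
print(plan)
```

The plan output should show a search using the index rather than a full table scan; the same check in MySQL is `EXPLAIN SELECT ...`, where the `key` column should name the composite index.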

MDX Crossjoin filtering on different hierarchies from the same dimension is very slow

Posted: 18 Sep 2013 01:42 PM PDT

I have an MDX query that was written by an MDX novice that is excessively slow (the query, that is, not the MDX novice). And I am also an MDX novice. Here is the query:

SELECT
NON EMPTY
(
    [Measures].[Status]
)
ON COLUMNS,
NON EMPTY
(
    Filter
    (
        Crossjoin
        (
            {
                [RiskType].[RiskType].[MemberValue123],
                [RiskType].[RiskType].[MemberValue456]
            },
            [Trade].[TradeType].[TradeType],
            [Expiry].[Expiry].[Expiry].ALLMEMBERS,
            [Tenor].[Tenor].[Tenor].ALLMEMBERS,
            [YieldCurveCurrency].[YieldCurveCurrency].[YieldCurveCurrency],
            [Trade].[TradeBook].[XYZ1],
            [Index].[Index].[Index],
            [EffectiveStrike].[Effective Strike Name].[Effective Strike Name]
        ),
        (
            [Measures].[Status]
        ) > 0
    )
)
ON ROWS
FROM [RePro]
WHERE
(
    [RiskSet].[RiskSet].[ABC],
    [Portfolio].[Portfolio].[XYZ],
    [RunDate].[RunDate].[17 Sep 2013]
)
CELL PROPERTIES VALUE

I have very little knowledge of MDX, but through some trial and error, I have found that removing the two [RiskType].[RiskType] entries from the Crossjoin makes the query return quickly. Removing both the [Trade].[TradeType] and [YieldCurveCurrency] entries also makes it return quickly. But obviously this changes the query, so it is not the solution. Perhaps, though, it provides clues as to where I should look in terms of indexes or suchlike, if there is anything like that in SQL Server Analysis Services (I'm more familiar with SQL Server databases)?

One thing I did try is to put & before [MemberValue123] and [MemberValue456]. This made the query return very quickly with no results. Unfortunately, I don't know if that is correct, because without the change the query takes too long to see whether there are any results. I don't know what difference & is supposed to make: is this the obvious answer, or does it change the query's meaning? It is equally possible that the original writer of this query should have used & anyway but never got to test the query against a real data set.

Any help would be greatly appreciated.

MS SQL 2012 Express dying randomly

Posted: 18 Sep 2013 11:39 AM PDT

I have a Windows 2008 R2 box with MS SQL 2012 Express. The problem is that randomly (sometimes twice a day, sometimes once a month) the SQL Server dies, leaving these messages in the error log:

Error: 49910, Severity: 10, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.

When I try to start the service again, it says:

Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
Error: 33086, Severity: 10, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.

After reboot of the machine it starts working again. How can I fix this problem?

Restore of database failed

Posted: 18 Sep 2013 12:03 PM PDT

I want to restore my database. I use SQL Server 2012 Express Edition, but when I restore my backup file in SQL Server Management Studio, I get this error:

System.Data.SqlClient.SqlError: There is insufficient free space on disk volume 'c:\' to create the database. The database requires 84260749312 additional free bytes, while only 47428677632 bytes are available. (Microsoft.SqlServer.SmoExtended)

My .bak file is 8.27 GB and my C: drive has 44.1 GB of free space. Why do I get this error, and how can I fix it? Thanks in advance.

How to transfer logins from sql server 2012 to 2008?

Posted: 18 Sep 2013 11:51 AM PDT

I need to transfer logins dynamically from SQL Server 2012 to SQL Server 2008, and I can't, because the password hash I get from 2012 is not in a valid format for 2008. I know that SQL Server changed its password hashing in 2012, so the hash is longer than in 2008.

Is there a way to do this?

Regards.

Is Oracle free to download?

Posted: 18 Sep 2013 03:24 PM PDT

I am a Java developer and currently use PostgreSQL as my RDBMS. However, I would also like to learn Oracle. So can I download the latest version of Oracle (11g, I guess) for free?

I found this link : http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-100572.html

and it seems that it is downloadable. However, I know that the Oracle RDBMS (or ORDBMS) is proprietary. So how does that work?

Also, will PL/SQL be included with the Oracle 11g download?

pg_upgrade unrecognized configuration parameter "unix_socket_directory"

Posted: 18 Sep 2013 09:00 AM PDT

I'm trying to upgrade PostgreSQL from 9.2 to 9.3 on Fedora 18, using this command as the postgres user:

$ pg_upgrade -b /bin -B /usr/pgsql-9.3/bin -d /var/lib/pgsql/data -D /var/lib/pgsql/9.3/data/ -j 2 -u postgres  

The error in the log

command: "/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/var/lib/pgsql/data" -o "-p 50432 -b -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'" start >> "pg_upgrade_server.log" 2>&1
waiting for server to start....FATAL: unrecognized configuration parameter "unix_socket_directory"
.... stopped waiting
pg_ctl: could not start server

As pointed out by a_horse in the comments, that parameter was replaced by unix_socket_directories (plural) in 9.3. But the server version being started is the old 9.2 one:

$ /bin/pg_ctl --version
pg_ctl (PostgreSQL) 9.2.4

Any ideas?

When was Torn Page Detection introduced to SQL Server and what is its upgrade behavior?

Posted: 18 Sep 2013 03:22 PM PDT

There are two different options for page verify in modern SQL Server: TORN PAGE DETECTION and CHECKSUM. NONE is of course also an option.

I believe CHECKSUM was introduced in SQL Server 2005 and that upgrading or restoring a DB from a prior version would maintain its previous page verify method. i.e. there was no implicit upgrade.

I have a production database that went into production using SQL Server 2000, though may have been developed against SQL Server 7.0, and has since moved to a SQL Server 2008 R2 server. Page Verify is set to NONE though I expected it to be TORN PAGE DETECTION.

I was wondering when TORN PAGE DETECTION became a page verify feature in SQL Server, and how it behaves when a database is migrated or upgraded to newer versions.

Lock a MySQL database for upgrade

Posted: 18 Sep 2013 08:54 AM PDT

I've looked around for solutions to lock a whole database, and the only one I found so far is FLUSH TABLES WITH READ LOCK.

This is fine for backups, but unfortunately I can't use that to prevent other accesses to the database while I'm patching my database with schema changes. If I run an ALTER TABLE after FLUSH TABLES WITH READ LOCK, I get the following message:

Can't execute the query because you have a conflicting read lock

Is there a way to just prevent other database connections from accessing the database temporarily while I'm patching the database?

I don't want to actively refuse the other connections, I'd just like to put them "on hold" until the patching is finished (a matter of seconds).

How can I do that?

retrieve data based on a condition

Posted: 18 Sep 2013 11:29 AM PDT

I have 2 tables, TA and TB:

TA
--------
id  name
1   a
2   b
3   c
4   d
5   e
6   f

TB
--------
id  p_id  name
1   1     a
2   1     b
3   1     c
4   2     a
5   2     b
6   2     d

I need: if p_id = 1, then:

id  name  match
1   a     1
2   b     1
3   c     1
4   d     0
5   e     0
6   f     0

If p_id = 2, then:

id  name  match
1   a     1
2   b     1
3   c     0
4   d     1
5   e     0
6   f     0
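One way to produce the requested flags is a LEFT JOIN from TA with a CASE on whether a matching TB row exists. A sketch in SQLite via Python's sqlite3 (the same LEFT JOIN + CASE pattern works in MySQL, SQL Server, and Oracle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TA (id INTEGER, name TEXT);
CREATE TABLE TB (id INTEGER, p_id INTEGER, name TEXT);
INSERT INTO TA VALUES (1,'a'),(2,'b'),(3,'c'),(4,'d'),(5,'e'),(6,'f');
INSERT INTO TB VALUES (1,1,'a'),(2,1,'b'),(3,1,'c'),(4,2,'a'),(5,2,'b'),(6,2,'d');
""")

def matches(p_id):
    # LEFT JOIN keeps every TA row; the CASE flags whether TB has
    # a row with this p_id and the same name.
    return conn.execute("""
        SELECT TA.id, TA.name,
               CASE WHEN TB.name IS NULL THEN 0 ELSE 1 END AS "match"
        FROM TA
        LEFT JOIN TB ON TB.name = TA.name AND TB.p_id = ?
        ORDER BY TA.id
    """, (p_id,)).fetchall()

print(matches(1))
print(matches(2))
```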

Is there a way to achieve Workload Management in MySQL?

Posted: 18 Sep 2013 06:18 PM PDT

Is there a way to set up Workload Management in MySQL (in particular v5.5 on AWS RDS)?

By Workload Management I mean the functionality implemented in e.g. Teradata, RedShift that allows prioritization of queries. In particular, I want to give high priority to tactical, short running queries from applications and give low priority to longer running background queries.

Duration Field in Profiler

Posted: 18 Sep 2013 01:05 PM PDT

I would like to monitor scripts which take more than 25 seconds. When I use a filter, I am not able to see the duration taken by the script. How can I get the time taken to execute the query?

My requirement is simple:
Get all the scripts which take more than 25 seconds (the time taken to execute each statement is required).

Reasons to Backup (or not backup) QA and Dev Databases

Posted: 18 Sep 2013 01:32 PM PDT

Pretty simple question: is it standard/best practice to backup dev and QA instances of the database?

Assume that the schema information is kept in some form of source control.

How to store object with property of generic type in DB

Posted: 18 Sep 2013 03:37 PM PDT

In my business domain there exists an entity called parameter. Besides other properties which are identical for each parameter, parameters have values. These values can be of different data types: strings, integers, timestamps, IP addresses, etc. It is also possible that I will need to add another data type later, but this should not happen very frequently. New parameters will be inserted frequently. Values of existing parameters will probably be updated at several different times. Parameters and values will be read very frequently, but only for displaying them to the user, so without any special analytical operation.

For the later code (which will be C++) I was thinking of a parameter base class from which subclasses for each data type will be derived. StringParameter, TimestampParameter, etc.

I was now wondering how to store parameters in an Oracle 10g database. Ideas so far are

  • Single table, string column: A single database table for all parameters where the column type for the value is VARCHAR2. The code can then transform the values to the correct data type by using functions like atoi. The drawback is that I would need an extra type column, and transforming values back and forth with atoi and itoa is neither very safe nor very maintainable (e.g. if the data type of one parameter changes from int to long).

  • Single table, raw column: A single database table for all parameters where the column type for the value is RAW. The code then just stores and retrieves the property "as-is" in binary. The drawback is that I would need an extra type column, and changing the data type of a parameter would break backward compatibility (or one would need to perform a database upgrade which converts all the old values to the new format).

  • Multiple tables, one for each data type: A basic parameter table for all common properties and several extra tables only for the value, so that the column can be of the correct data type. Drawback: not all data types from my domain can be mapped to an Oracle data type. Additionally, I would need a lot of new tables and joins, and introducing new parameters would require a change in the database schema.

Which option would you go for, and why? Is there another solution I have not yet thought of?
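For what it's worth, the first option (single table, string column plus a type tag) can be kept reasonably safe by centralizing the conversions in one place instead of scattering atoi/itoa calls through the code. A sketch in Python with SQLite standing in for Oracle (table and column names are made up; in C++ the converter table would map type tags to parsing functions the same way):

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
# One table for all parameters: the value is stored as text plus a
# type tag, and the application converts on read.
conn.execute("""
CREATE TABLE parameter (
    id         INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    value_type TEXT NOT NULL,   -- 'int', 'string', 'timestamp', ...
    value      TEXT NOT NULL
)
""")

# Single registry of conversions; adding a data type means one new entry.
CONVERTERS = {
    "int": int,
    "string": str,
    "timestamp": lambda s: datetime.fromisoformat(s),
}

def read_parameter(param_id):
    name, vtype, raw = conn.execute(
        "SELECT name, value_type, value FROM parameter WHERE id = ?",
        (param_id,)).fetchone()
    return name, CONVERTERS[vtype](raw)

conn.execute("INSERT INTO parameter VALUES (1, 'timeout', 'int', '30')")
print(read_parameter(1))
```

Changing a parameter's type (e.g. int to long) then only touches the registry, not every call site.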

How to access a SQL Server database from other computer connected to the same workgroup?

Posted: 18 Sep 2013 07:26 PM PDT

I have created a C# application which uses a SQL Server database. Other computers are connected to me and to each other in a workgroup. I have shared my C# application with the others. When they open the application they get the error:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible.

But the application is working fine on my PC. The connection string I am using is

Data Source=ASHISHPC1\SQLEXPRESS;Initial Catalog=ACW;User ID=ash159;Password=ashish159  

which is stored in a .config file.

What must I do? I have enabled TCP/IP on the server, but the same error persists. Do I need to change the connection string, or something else?

Please help. Thank you.

Improve performance of transactional replication

Posted: 18 Sep 2013 09:21 AM PDT

I'm testing transactional replication. Replication works fine if I insert/update a handful of rows. The problem occurs when I have a SQL job running which deletes, updates, and inserts thousands (15,000-plus) of rows in each database (more than 100 databases) on the server. This job runs every n minutes. It eventually causes locking issues and slows down replication.

The main objective of the replication is to be able to scale horizontally and also to improve performance. The following is a quick overview of the test environment:

  • 2 database servers
  • Each server has 100 unique databases and 5 shared databases (which are replicated)

Any suggestions on how to get around the locking issues?

Thanks

Moving large databases

Posted: 18 Sep 2013 09:21 PM PDT

I have a CentOS server, and /var/lib/mysql/ is 125 GB (the disk has 1 GB of free space).

Ordinarily I would use mysqldump to backup the databases, but I don't normally work with such large databases, so I need to know the safest way of copying the databases over to a new server.

All advice appreciated!

How should I set up my social network database design?

Posted: 18 Sep 2013 10:21 AM PDT

I am designing a DB for a social-network-type website where users enter a lot of information, ranging from family member details, education, and employment to personal favorites such as TV, movies, music, food, and books. I am using InnoDB and expect exponentially increasing writes but far fewer reads. I already have 26 tables.
My question: is it better to have a large number of individual tables (for example TV, movie, music, food, books) or to put them under one big table such as MyPersonalFavorite to reduce the number of tables? I fear that with my current design there will be 26 individual disk I/Os to write one person's information.

Database user specified as a definer

Posted: 18 Sep 2013 11:21 AM PDT

I have a view in my database. The problem is below.

Error SQL query:

SELECT *
FROM `lumiin_crm_prod`.`v_contact`
LIMIT 1;

MySQL said:

1449 - The user specified as a definer ('lumicrm'@'%') does not exist

I Googled for a solution: the user was created for a specific host, not globally.

How do I create the user globally (i.e. for any host)?

How do I execute an Oracle SQL script without sqlplus hanging on me?

Posted: 18 Sep 2013 04:21 PM PDT

For an automated task I would very much like to run some SQL scripts and make sure that sqlplus does not hang under any circumstances, i.e.:

  • If the script contains any named substitution variable that has to be entered by the user, sqlplus should return with an error instead of prompting for the variable -- I cannot use set define off, as some of these scripts need to accept command line parameters that have to be resolved with &1
  • The script must not "hang" when it doesn't contain an exit; at the end.

    Solved: I think now that I can achieve this by wrapping the sql-script in a secondary "caller script". I.e., the caller script calls the other script with @ and then has a fixed exit; after that. The other script doesn't need an exit that way.

  • Anything else: If it would require a prompt, it should return with an error.

How can I do this with Oracle (and sqlplus, or something else)?

SSRS appears to be ignoring Permissions set using Report Manager

Posted: 18 Sep 2013 07:21 PM PDT

I have setup SSRS on SQL Server 2008 in native mode.

As an administrator I can login to report manager, upload reports and run them, and also use the Web Service URL to generate reports.

I have also created a local user on the machine, I went into Report Manager as Admin, and at the top level set permissions that should assign the local user to all roles.

When I log in to the machine as that user and then navigate to Report Manager, I just get the heading of the page, but do not see any of the folders that are configured.

I've checked and the folders are set to inherit parent permissions and they are showing the newly created local user in there too.

It seems odd that I have set the permissions, yet SSRS is still not showing what I should be able to see. Is there another step I need to take other than configuring the permissions in Report Manager?

When logged in as the newly created local user:

Report Manager - Shows the heading for the page, but no folders/items
Web Service URL (http://machine/ReportServer) - rsAccessDeniedError

Powershell to dump database schema to text file

Posted: 18 Sep 2013 10:07 AM PDT

I want to be able to dump a database schema to a sql text file using Powershell. I want this because our DBAs want to be able to do a diff on the file that is committed to SVN. They won't execute it if they can't see what it contains.

I've found many scripts that backup to a binary .bak file, but I can't find any that dump as a sql text file.

How do you do this in Powershell?
