Thursday, April 25, 2013

[how to] I am new to the programming field and I am interested in jdbc - Tips or Advice needed



I am new to the programming field and I am interested in jdbc - Tips or Advice needed

Posted: 25 Apr 2013 07:53 PM PDT

I am currently earning my associate's degree in software engineering. I have been told that in order to be successful in a computer science field you have to specialize in something. After researching several areas, I have decided I want to specialize in Java Database Connectivity. It earns a decent income and seems interesting. I would like to know what the field is like: the job, bosses/managers, pay, etc. I just want to know as much as I can, first hand, from people who currently work in the field. What are some tips you would give a newcomer in college, and what do you wish you had known before getting into the industry? I have heard the life of a programmer can be strenuous. I have a husband and two children; will I ever have time for them again after I actually start working? FYI, I am going to go directly into my bachelor's degree and possibly my master's in software engineering. Any advice is appreciated.

Selecting the first purchase item of each customer

Posted: 25 Apr 2013 09:10 PM PDT

I have a table like this:

CREATE TABLE SALES (
  CID NUMBER(4),
  PDATE DATE,
  ITEMCODE NUMBER(4),
  ITEMNAME VARCHAR2(20)
);

INSERT INTO SALES VALUES (1111,'15-JAN-2011',4001,'LUX');
INSERT INTO SALES VALUES (1111,'17-JAN-2011',4002,'DRY FRUIT');
INSERT INTO SALES VALUES (1111,'20-JAN-2011',4003,'ARIEL');
INSERT INTO SALES VALUES (1111,'03-JAN-2011',4003,'ARIEL');
INSERT INTO SALES VALUES (1111,'04-JAN-2011',4010,'TELENOR CARD');

INSERT INTO SALES VALUES (2222,'05-FEB-2011',4005,'APPLE');
INSERT INTO SALES VALUES (2222,'06-FEB-2011',4001,'LUX');
INSERT INTO SALES VALUES (2222,'09-FEB-2011',4006,'ORS');

INSERT INTO SALES VALUES (3333,'03-FEB-2009',4007,'CHOCOLATE');
INSERT INTO SALES VALUES (3333,'09-MAR-2011',4001,'LUX');
INSERT INTO SALES VALUES (3333,'15-APR-2011',4008,'MILK');
INSERT INTO SALES VALUES (3333,'03-JAN-2011',4010,'TELENOR CARD');

INSERT INTO SALES VALUES (4444,'11-JAN-2011',4009,'TAPAL MIXTURE');
INSERT INTO SALES VALUES (4444,'19-JAN-2011',4010,'SUGAR');
INSERT INTO SALES VALUES (4444,'21-JAN-2011',4001,'LUX');
INSERT INTO SALES VALUES (4444,'05-JAN-2011',4005,'APPLE');
INSERT INTO SALES VALUES (4444,'15-JAN-2011',4001,'LUX');

INSERT INTO SALES VALUES (5555,'01-JAN-2011',4009,'TAPAL MIXTURE');
INSERT INTO SALES VALUES (5555,'03-JAN-2011',4010,'TELENOR CARD');
INSERT INTO SALES VALUES (5555,'22-JAN-2011',4005,'APPLE');

I need to fetch the name of the first item purchased by each customer ID individually.

How can I do this?
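One possible approach is an analytic-function sketch against the SALES table above (Oracle syntax): rank each customer's rows by purchase date and keep the earliest. Note that ROW_NUMBER breaks ties arbitrarily; RANK would return all rows tied for the earliest date.

```sql
-- Rank each customer's purchases by date; rn = 1 is that customer's first purchase.
SELECT cid, pdate, itemcode, itemname
FROM (
    SELECT s.*,
           ROW_NUMBER() OVER (PARTITION BY cid ORDER BY pdate) AS rn
    FROM sales s
)
WHERE rn = 1;
```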

SQL Server 2008 R2 Logs

Posted: 25 Apr 2013 05:43 PM PDT

I just want to know if SQL Server 2008 R2 has a default logging scheme for SELECT statements (or any other T-SQL, for that matter). If yes, where can I see it? If not, how can I set it up?

I tried searching Google already, so just pointing me to an existing source would be fine. Thank you very much.
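There is no default logging of SELECT statements; one way to capture them on 2008-era builds is the SQL Server Audit feature. A minimal sketch, assuming a hypothetical target table dbo.Orders and audit file path (note that database-level audit specifications require Enterprise Edition on 2008 R2):

```sql
-- Server-level audit writing events to a file (path is an example).
USE master;
CREATE SERVER AUDIT SelectAudit
    TO FILE (FILEPATH = 'C:\AuditLogs\');
ALTER SERVER AUDIT SelectAudit WITH (STATE = ON);
GO
-- In the target database: record SELECTs against one table by any user.
USE MyDatabase;  -- hypothetical database name
CREATE DATABASE AUDIT SPECIFICATION SelectAuditSpec
    FOR SERVER AUDIT SelectAudit
    ADD (SELECT ON OBJECT::dbo.Orders BY public)
    WITH (STATE = ON);
```

The captured events can then be read back with fn_get_audit_file.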

Network Databases

Posted: 25 Apr 2013 05:17 PM PDT

I am doing some research on network databases, e.g. IDMS-type databases. Are there any free/open-source network databases available?

TIA

How to determine cause of runtime increase given two query plans with SpillToTempDb warning

Posted: 25 Apr 2013 04:44 PM PDT

Two actual query plans were captured for two executions of one query:

Plan "Fast" took ~1s and occurs about 90% of the time.

Plan "Slow" took ~16s.

Since the graphical plans look identical to me, I dumped the XML versions and performed a diff to look at the text differences to be sure.

There are 2 "SpillToTempDb" warnings on the fast side and four on the slow side. Looking at the 2 extra on the slow side:

One SpillToTempDb warning on "Parallelism (Repartition Streams) Cost 1%":

    <SpillToTempDb SpillLevel="0" />

One SpillToTempDb warning on "Parallelism (Distribute Streams) Cost 1%":

    <SpillToTempDb SpillLevel="0" />

The cost is the same (1%) for both the slow and fast cases. Does that mean the warning can be ignored? Is there a way to show "actual" times or costs? That would be so much better! Actual row counts are the same for the operation with the spill.

Besides performing a manual text diff of xml execution plans to find the differences in warnings, how can I tell what the 1500% increase in runtime is actually due to?

Differences except "RunTimeCountersPerThread" lines:

Left file:  C:\Users\chrisr\Desktop\fast.sqlplan
Right file: C:\Users\chrisr\Desktop\slow.sqlplan

10   <ThreadStat Branches="10" UsedThreads="85">
10   <ThreadStat Branches="10" UsedThreads="73">
------------------------------------------------------------------------
19   <MemoryGrantInfo SerialRequiredMemory="1536" SerialDesiredMemory="10816" RequiredMemory="124224" DesiredMemory="133536" RequestedMemory="133536" GrantWaitTime="0" GrantedMemory="133536" MaxUsedMemory="105440" />
19   <MemoryGrantInfo SerialRequiredMemory="1536" SerialDesiredMemory="10816" RequiredMemory="124224" DesiredMemory="133536" RequestedMemory="133536" GrantWaitTime="0" GrantedMemory="133536" MaxUsedMemory="107840" />
------------------------------------------------------------------------
276   <Warnings>
277     <SpillToTempDb SpillLevel="0" />
278   </Warnings>
------------------------------------------------------------------------
630   <Warnings>
631     <SpillToTempDb SpillLevel="0" />
632   </Warnings>
------------------------------------------------------------------------

Should I be looking at the differences in the RunTimeCountersPerThread lines? It appears only the ordering of the threads differs between them.

how to fix error #1062 duplicate entry [closed]

Posted: 25 Apr 2013 12:30 PM PDT

What's causing this error?

SQL query:

DELETE FROM `adminpanel`.`aggrement` WHERE `aggrement`.`id` = 3

MySQL said:

#1062 - Duplicate entry '3' for key 'PRIMARY'

This happens when I try to delete a record from my database.

Efficient way to perform approximated search?

Posted: 25 Apr 2013 12:20 PM PDT

I have to perform a join between table_a and table_b using three fields as the key, one of them being the date of the event; say both tables have id1, id2 and evnt_time for each record.

As it happens, though, the evnt_time can be displaced by a few seconds between these tables. Thus the join has to behave sort of like table_a.id1 = table_b.id1 and table_a.id2 = table_b.id2 and table_a.evnt_time ~ table_b.evnt_time, where:

  • ~ means approximately, given + or - seconds
  • There must be a way to give ~ parameters of how precise should be. e.g.: table_a.evnt_time ~(2) table_b.evnt_time will join table_a.evnt_time with table_b.evnt_time - 2s, table_b.evnt_time - 1s, table_b.evnt_time, table_b.evnt_time + 1s and table_b.evnt_time + 2s.

To tackle the situation, there are a few possibilities I am experimenting with, but my doubt is about which architecture to use to perform a very efficient "approximated join" - these tables are partitioned, sub-partitioned, and each sub-partition may contain billions of records... (although I have a reasonable amount of resources).

For once, I thought of storing the unique second of the event (i.e. the second it happened, counted since the Julian calendar) on both sides, so the real join (given a "~(2)") could simply look like:

select *
from
    table_a,
    table_b
where
    table_a.id1 = table_b.id1
    and table_a.id2 = table_b.id2
    and (table_a.evnt_sec = table_b.evnt_sec
         or table_a.evnt_sec = table_b.evnt_sec + 1
         or table_a.evnt_sec = table_b.evnt_sec + 2
         or table_a.evnt_sec = table_b.evnt_sec - 1
         or table_a.evnt_sec = table_b.evnt_sec - 2)

But I am unsure how efficiently that would perform with the indexes and scans.
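An alternative sketch (assuming the same derived evnt_sec column described above) expresses the tolerance as a single range predicate rather than five equality branches, so an index containing evnt_sec can serve it with one range scan per outer row:

```sql
-- The BETWEEN bounds are where the "~(2)" tolerance parameter plugs in.
select a.*, b.*
from table_a a
     join table_b b
       on  a.id1 = b.id1
       and a.id2 = b.id2
       and b.evnt_sec between a.evnt_sec - 2 and a.evnt_sec + 2
```

With a composite index on table_b (id1, id2, evnt_sec), each outer row probes one contiguous 5-second slice instead of making five separate lookups.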

This is just an example, I am not stuck to any sort of architecture. Also, I am using Oracle 11gR2.

Will SSIS 2012 work with SQL Server 2008R2 (not integrated)

Posted: 25 Apr 2013 05:20 PM PDT

Our company has SQL Server 2008R2 and plans to upgrade to 2012 in late 2013 or early 2014. I would like to use SSIS 2012 now as we do have the software.

We have a company policy where we cannot run SSIS packages on the database server itself. We run them from another app server, not integrated services.

My question is: if I were to install all the 2012 components for SSIS on our app server, would there be any potential issues? I have not used SSIS 2012 yet, so I do not know if the OLE DB components I'd use to point to our SQL 2008 instance would be an issue.

Enforce indirect relationship

Posted: 25 Apr 2013 11:15 AM PDT

Given the following structure (diagram omitted): is there a way to enforce that Table1.id3 can only be set to values in Table4.id3 where Table4.id2 = Table1.id2? Initially I thought of a CHECK constraint, but subqueries are not allowed.
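Since a CHECK with a subquery is out, the standard relational trick is a composite foreign key (a sketch; only the table and column names from the question are assumed, the rest of each table's definition is not shown): make the pair (id2, id3) unique in Table4 and reference both columns from Table1, so Table1.id3 can only co-occur with Table1.id2 if that pair exists in Table4.

```sql
-- The referenced pair must be unique (a PK or UNIQUE constraint).
ALTER TABLE Table4
    ADD CONSTRAINT uq_table4_id2_id3 UNIQUE (id2, id3);

-- The composite FK ties Table1.id3 to Table1.id2 through Table4.
ALTER TABLE Table1
    ADD CONSTRAINT fk_table1_table4 FOREIGN KEY (id2, id3)
    REFERENCES Table4 (id2, id3);
```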

SQL 2012 indirect checkpoint

Posted: 25 Apr 2013 06:15 PM PDT

I am trying to understand where to draw a fine line for Indirect Checkpoint that was introduced in SQL 2012.

Based on my understanding

SQL Server performs a checkpoint to flush dirty pages to disk, but the frequency or time interval between checkpoints depends on many factors. It is controlled by a server-level configuration option called recovery interval, whose default value of zero means that on a fairly busy system SQL Server can perform a checkpoint every minute, resulting in a database recovery time of under a minute.

SQL 2012 introduced the indirect checkpoint, which allows you to control the recovery interval for individual databases.

At a high level this setting appears to be a good thing: it evens out disk IO by performing more frequent checkpoints instead of a periodic checkpoint that can flood the underlying IO subsystem.

Now my questions are

a) Does the indirect checkpoint take into consideration the number of dirty buffers, rather than the number of transactions that the automatic checkpoint usually uses?

b) I keep finding in many blogs that the indirect checkpoint setting is very dangerous, as it can keep your IO system very busy. Is the statement below, from one such blog, true?

Checkpoint normally writes large blocks of data to disk optimally in single write operation, up to 256KB, depending on the number of contiguous dirty pages in cache that needs to be flushed to disk. Once you turn on the indirect checkpoint feature by setting target recovery time to a non-zero value, the checkpoint writes will turn into single page writes, 8KB writes.

c) Is the indirect checkpoint setting more suitable for Data Warehouse or OLTP workloads? What scenarios would you consider for the SQL database in question before leveraging indirect checkpoint?

Transactional Replication Concerns

Posted: 25 Apr 2013 12:26 PM PDT

We are debating an architecture that employs SQL Server 2012 and transactional replication. The idea is to offload reporting activity to a secondary server and to have the ability to include/exclude what is replicated, keeping historic data on the reporting server.

Couple of questions:

  1. Does transactional replication put any load on the primary server (pardon my use of generic terms) while it is moving data out?
  2. If the subscriber server goes down, will the primary still function?

Thanks!

best relationship for this design?

Posted: 25 Apr 2013 12:57 PM PDT

There is a table users with autoincrement primary key id.

There is also a table employees (every employee is a user but not vice versa). The employees primary key is userid. The field userid is not autoincrement.

employees.userid should be a foreign key for users.id.

Which kind of relationship in Workbench it should be?

As far as I understand:

  • It cannot be 1:1 relationships because for a given user the corresponding employee may not exist.

  • It is not advisable to be 1:n relationship because n can be only 0 or 1.

  • It should be an identifying relation because a user is determined by an employee.

Please advise me what is the best relationship in this case.

Extract part of string based on nth instance of character

Posted: 25 Apr 2013 08:38 PM PDT

I am trying to find a way to extract part of the strings below. I need everything between the 6th and 7th \ character.

\\fileServerA\d$\LiteSpeed\Wednesday\ServerA\Tlog\DBA1_TLOG_20110504_0333.SLS
\\fileServerA\d$\LiteSpeed\Wednesday\ServerB\Tlog\model_TLOG_20120321_1038.SLS
\\fileServerA\d$\LiteSpeed\Saturday\ServerA\TLog\DBA_2_TLOG_20120811_1538.SLS
\\fileServerA\d$\LiteSpeed\Friday\ServerB\Tlog\DB3_TLOG_20120914_2330.BAK
\\fileServerA\d$\LiteSpeed\Wednesday\ServerC\Tlog\DB44_TLOG_20120815_1445.BAK

Desired results:

ServerA
ServerB
ServerA
ServerB
ServerC
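In T-SQL, one sketch (assuming every path has the same shape as the samples above) walks to the 6th backslash with CHARINDEX in a small loop, then cuts up to the 7th:

```sql
-- @path holds one UNC path; find the 6th '\', then extract up to the 7th.
DECLARE @path varchar(260) =
    '\\fileServerA\d$\LiteSpeed\Wednesday\ServerA\Tlog\DBA1_TLOG_20110504_0333.SLS';

DECLARE @pos int = 0, @n int = 0;
WHILE @n < 6
BEGIN
    SET @pos = CHARINDEX('\', @path, @pos + 1);  -- position of the next '\'
    SET @n = @n + 1;
END;

SELECT SUBSTRING(@path, @pos + 1,
                 CHARINDEX('\', @path, @pos + 1) - @pos - 1) AS server_name;
-- Returns 'ServerA' for the sample path above.
```

For set-based use across a whole table, the same CHARINDEX arithmetic can be nested inline or wrapped in a scalar function.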

MySQL (v5.6.11) InnoDB 5.6.11 restoring indexes during table copy

Posted: 25 Apr 2013 08:06 PM PDT

When I view the profile for my OPTIMIZE TABLE query, here is what I see for the InnoDB storage engine 5.6.11:

+----------------------+------------+
| Status               | Duration   |
+----------------------+------------+
| starting             | 0.023145   |
| checking permissions | 0.003454   |
| Opening tables       | 0.000054   |
| System lock          | 0.000056   |
| init                 | 0.000004   |
| Opening tables       | 0.000726   |
| setup                | 0.009575   |
| creating table       | 0.005515   |
| After create         | 0.000013   |
| System lock          | 0.073562   |
| copy to tmp table    | 999.999999 |
| rename result table  | 1.218323   |
| end                  | 0.000026   |
| Opening tables       | 0.000625   |
| System lock          | 0.424206   |
| query end            | 0.000010   |
| closing tables       | 0.000004   |
| freeing items        | 0.000095   |
| cleaning up          | 0.000040   |
+----------------------+------------+

I have one primary key and one secondary key. Since I don't see the indexes being rebuilt as a separate step in the profile, I assume they are rebuilt alongside the table copy. This is very far from optimal, right? Why hasn't this been handled in InnoDB? Or am I missing something here? Please let me know. Thanks.

WITH ROLLUP WHERE NULL = x

Posted: 25 Apr 2013 11:10 AM PDT

I tried to find this question somewhere else on here, and found a couple that were similar to it -- but all the answers seemed pretty convoluted, and I think it was surprising to me because I just imagined that SQL would have a way easier way to do this.

Essentially, I'm using a GROUP BY to group two values on top of each other. The coding is simple and works well, where I get these results using WITH ROLLUP at the end:

Type    Person     Count
Type A  Person 1   10
Type A  Person 2   91
Type A  Person 3   13
Type A  Person 4   10
Type A  Person 5   2
Type A  NULL       126
Type B  Person 6   16
Type B  Person 7   63
Type B  Person 8   15
Type B  Person 9   22
Type B  Person 10  16
Type B  Person 11  3
Type B  Person 12  20
Type B  Person 13  44
Type B  NULL       198
NULL    NULL       360

Is there an easy way to say: if NULL appears in the "Person" column, display it as "Subtotal", and if NULL appears in the "Type" column, display it as "Grand Total"?

And if that's weirdly complicated, I'd be totally fine with just calling all of them "Total."

Thanks!
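One common sketch uses GROUPING(), which returns 1 exactly on the NULLs that ROLLUP generates (shown here in SQL Server syntax against a hypothetical source table; MySQL 8.0+ also supports GROUPING() together with WITH ROLLUP):

```sql
SELECT
    Type,
    CASE
        WHEN GROUPING(Type) = 1   THEN 'Grand Total'  -- the all-NULL rollup row
        WHEN GROUPING(Person) = 1 THEN 'Subtotal'     -- the per-Type rollup row
        ELSE Person
    END AS Person,
    COUNT(*) AS [Count]
FROM SourceTable                   -- hypothetical table name
GROUP BY ROLLUP (Type, Person);
```

Using GROUPING() rather than testing Person IS NULL also distinguishes rollup rows from genuine NULL data values.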

Exporting schema data (tables, permisions) into another schema on same database

Posted: 25 Apr 2013 10:55 AM PDT

I want to take data from one schema, schema1, and put it into schema2. This should be quite easy to do, but I have searched through the Oracle documentation and can't find a way to do it.

MySQL: Error reading communication packets

Posted: 25 Apr 2013 05:31 PM PDT

I get this warning in mysql,

[Warning] Aborted connection 21 to db: 'MyDB' user: 'MyUser' host: 'localhost' (Got an error reading communication packets)  

I have been through a few topics on Google, and following some suggestions I increased max_allowed_packet from 128 to 512 to 1024, with the same behaviour.

I am using Drupal 7, and yes, there are lots of blob data types, but 1024 MB of max_allowed_packet should be enough in my opinion.

Is there any other workaround to overcome this warning?

Oracle Connection Manager Configuration

Posted: 25 Apr 2013 11:40 AM PDT

I have a problem with Oracle Connection Manager. I've read the Oracle topic about it at OCM, but it doesn't work well.


(A diagram, not reproduced here, describes the topology I want to configure.)

The IP for each machine is

  1. cmhost1 : 192.168.0.153
  2. dbhost1 : 192.168.0.150
  3. client1 : 192.168.0.159

And these are the files for each machine

1. cmhost1 (proxy server): one file, called cman.ora

CMAN=
  (CONFIGURATION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.153)(PORT=1521))
    (RULE_LIST=
      (RULE=(SRC=192.168.0.159)(DST=192.168.0.150)(SRV=orcl)(ACT=accept)
        (ACTION_LIST=(AUT=on)(MCT=120)(MIT=30)))
      (RULE=(SRC=192.168.0.153)(DST=192.168.0.153)(SRV=orcl)(ACT=accept)))
    (PARAMETER_LIST=
      (MAX_GATEWAY_PROCESSES=8)
      (MIN_GATEWAY_PROCESSSES=3)
      (REMOTE_ADMIN=YES)))

2. dbhost1 (original server): three files; to each I've added some lines, as follows

tnsnames.ora

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

listener.ora

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    )
  )

init.ora

dispatchers = "(PROTOCOL=TCP) (MULTIPLEX=ON))"
dispatchers = "(PROTOCOL=SPX) (MULTIPLEX=ON))"

3. client1 (the client): one file, tnsnames.ora

orcl=
  (DESCRIPTION=
    (SOURCE_ROUTE=YES)
    (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.153)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.150)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=orcl))
  )

The Problems :

  1. ORA-12541: TNS: no listener - yet I do define a listener on the server (dbhost1), and it is running in Services.

  2. ORA-12154: could not resolve the connect identifier specified - yet I do define my DB service (orcl) in tnsnames.ora.

Any Help will be appreciated

Database migration, how to

Posted: 25 Apr 2013 11:22 AM PDT

I have two databases, DB1 and DB2. Both databases contain these two tables: mo_sms and mt_sms. This is the structure of those tables:

CREATE TABLE IF NOT EXISTS `mo_sms` (
  `id_MO` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `sms_proceso` char(1) NOT NULL COMMENT 's=SMS c=CHAT d=Descargas',
  `msisdn` varchar(20) NOT NULL,
  `texto_sms` varchar(160) DEFAULT NULL,
  `brand_id` int(10) unsigned NOT NULL,
  `fecha_sms` datetime NOT NULL,
  `comando_id` int(10) unsigned NOT NULL DEFAULT '0',
  `alias_desc` varchar(25) DEFAULT NULL,
  `shortcode_id` int(10) unsigned NOT NULL,
  `precio` float(11,2) unsigned DEFAULT '0.00' COMMENT 'Precio del MO',
  `id_user` int(10) unsigned NOT NULL,
  `state` char(1) NOT NULL DEFAULT '0' COMMENT '0=Por procesar 1=Procesado',
  `tipo_sms` tinyint(4) NOT NULL DEFAULT '0' COMMENT '0=Normal, <>0 dependera del tipopredeterminado',
  `anio_sms` smallint(4) unsigned NOT NULL DEFAULT '0',
  `mes_sms` smallint(2) unsigned zerofill NOT NULL DEFAULT '00',
  PRIMARY KEY (`id_MO`),
  KEY `ix_carrier` (`brand_id`),
  KEY `ix_fecha_sms` (`fecha_sms`),
  KEY `ix_fecha_carrier_keyword` (`fecha_sms`,`brand_id`,`alias_desc`),
  KEY `ix_msisdn` (`msisdn`),
  KEY `ix_sms_proceso` (`sms_proceso`),
  KEY `ix_sms_proceso_state` (`sms_proceso`,`state`),
  KEY `ix_id_user` (`id_user`),
  KEY `ix_fecha_sms_user` (`fecha_sms`,`id_user`),
  KEY `ix_varios` (`anio_sms`,`mes_sms`,`comando_id`,`shortcode_id`,`brand_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='Tabla de MO' AUTO_INCREMENT=82972 ;

CREATE TABLE IF NOT EXISTS `mt_sms` (
  `id_MT` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `sms_proceso` char(1) NOT NULL DEFAULT 'c' COMMENT 's=SMS c=CHAT d=Descargas',
  `msisdn` varchar(20) NOT NULL,
  `texto_sms` varchar(160) DEFAULT NULL,
  `brand_id` int(10) unsigned NOT NULL,
  `fecha_sms` datetime NOT NULL,
  `comando_id` int(10) unsigned NOT NULL DEFAULT '0',
  `alias_desc` varchar(25) DEFAULT NULL,
  `shortcode_id` int(10) unsigned NOT NULL,
  `id_user` int(10) unsigned NOT NULL,
  `tipo_sms` tinyint(4) unsigned NOT NULL DEFAULT '0' COMMENT '0=Normal, <>0 dependera del tipopredeterminado',
  `id_MO` int(11) unsigned NOT NULL,
  `state` char(1) DEFAULT '0' COMMENT '0=Por Procesar 1=Procesado',
  `anio_sms` smallint(4) unsigned NOT NULL DEFAULT '0',
  `mes_sms` smallint(2) unsigned zerofill NOT NULL DEFAULT '00',
  PRIMARY KEY (`id_MT`),
  KEY `ix_carrier` (`brand_id`),
  KEY `ix_fecha_sms` (`fecha_sms`),
  KEY `ix_fecha_carrier_keyword` (`fecha_sms`,`brand_id`,`alias_desc`),
  KEY `ix_msisdn` (`msisdn`),
  KEY `ix_sms_proceso` (`sms_proceso`),
  KEY `ix_id_user` (`id_user`),
  KEY `ix_fecha_sms_user` (`fecha_sms`,`id_user`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='Tabla de MT' AUTO_INCREMENT=93853;

I have some values in DB2 that should be migrated (inserted) into DB1. My problem is that I don't know how to take the records from the DB2.mo_sms table, along with their related rows from DB2.mt_sms, and insert them into DB1.mo_sms and DB1.mt_sms. For example, if I take all the records from mo_sms with fecha_sms LIKE '%2013-04-19%' and insert them into DB1.mo_sms, new IDs will be generated and the integrity will be lost. Should I do this programmatically, or is there a way to do it using just SQL queries? I'm using MySQL as the DBMS.
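If both databases live on the same MySQL server (so cross-database queries work), one hedged sketch in plain SQL is to shift the copied keys by a constant offset larger than any existing id in DB1. Because id_MO is shifted identically in both inserts, the mo_sms/mt_sms relationship survives. The offset value and date range below are examples:

```sql
-- Pick an offset safely above DB1's current AUTO_INCREMENT values.
SET @off = 1000000;

INSERT INTO DB1.mo_sms
SELECT id_MO + @off, sms_proceso, msisdn, texto_sms, brand_id, fecha_sms,
       comando_id, alias_desc, shortcode_id, precio, id_user, state,
       tipo_sms, anio_sms, mes_sms
FROM DB2.mo_sms
WHERE fecha_sms >= '2013-04-19' AND fecha_sms < '2013-04-20';

-- Copy the related MT rows, shifting the id_MO foreign key by the same offset.
INSERT INTO DB1.mt_sms
SELECT id_MT + @off, sms_proceso, msisdn, texto_sms, brand_id, fecha_sms,
       comando_id, alias_desc, shortcode_id, id_user, tipo_sms,
       id_MO + @off, state, anio_sms, mes_sms
FROM DB2.mt_sms
WHERE id_MO IN (SELECT id_MO FROM DB2.mo_sms
                WHERE fecha_sms >= '2013-04-19' AND fecha_sms < '2013-04-20');
```

Afterwards, both tables' AUTO_INCREMENT counters in DB1 should be bumped above the copied ids.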

SWITCHOFFSET() not working in SQL Server 2008 R2

Posted: 25 Apr 2013 09:05 PM PDT

We have been encountering a problem with the built-in function switchoffset() in SQL Server 2008 R2. The error says:

The Timezone provided to built-in function switchoffset() is invalid.; 4200.

Our code works fine in SQL Server 2012, so I don't think it's a syntax problem. Does anyone here know of a CU or SP for SQL Server 2008 R2 that fixes this issue? We have development machines running SQL Server 2008 R2 where the code works fine, but they have updates installed. We have tried to replicate that machine's CUs and SPs, but without luck.

This is the part of the query that does not work:

WITH e (Epoch, UtcDate) AS
(
    SELECT
        e1.Epoch / 3600 * 3600 AS Epoch
        , todatetimeoffset(dateadd(s, e1.Epoch / 3600 * 3600, '1970-01-01'), 0) AS UtcDate
    FROM Epoch e1
    UNION
    SELECT
        max(e2.Epoch) / 3600 * 3600 + 3600 AS Epoch
        , todatetimeoffset(dateadd(s, max(e2.Epoch) / 3600 * 3600 + 3600, '1970-01-01'), 0) AS UtcDate
    FROM Epoch e2
),
sQ1 AS
(
    SELECT
        e.Epoch
        , e.UtcDate
        , switchoffset(e.UtcDate, tz.UtcOffset / 60) AS Date -- here
        , tz.Name AS TimeZoneName
        , tz.DST AS TimeZoneDst
        , tz.Abbrevation AS TimeZoneAbbrevation
        , tz.UtcOffset
    FROM e
    INNER JOIN CurrentTimeZoneTransition tz
        ON tz.StartTransition <= e.Epoch AND e.Epoch < tz.StopTransition
)

We have also found a workaround: changing the UNION to UNION ALL works, but only with large data sets; with small data sets the error returns.

why would command text not execute at all?

Posted: 25 Apr 2013 05:04 PM PDT

In my report I have:

<CommandText>SELECT
    column1,
    column2,
    'poop'
from mytable
</CommandText>

I am using SQL Server Profiler to see exactly what statement is being sent.

I have set only two filters:

  1. databaseName
  2. (a second filter, shown only in a screenshot)

Yet after running the report, no statement gets intercepted.

I suspect that, because I am a beginner at SSRS, I am missing something crucial here.

For what reason would the command text not be executed at all?

I did follow this question to make sure that I am using SQL Profiler correctly, and indeed I am: http://stackoverflow.com/questions/9107383/sql-server-profiler-capture-calls-to-your-databases-stored-procs-during-ssrs

Another bit of important information: although the chart shows no data (screenshot omitted), I do see data when I run the command text from SSMS!

loading a csv file which is on local system in to Mysql DB which is on remote server

Posted: 25 Apr 2013 01:04 PM PDT

Can we directly load a CSV file that is on the local system into a MySQL DB that is installed on a remote server?

The plain LOAD DATA INFILE command can only load files that reside on the server's own filesystem.
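Yes: with the LOCAL keyword, the mysql client reads the file from your machine and ships its contents to the remote server (both client and server must have local_infile enabled). A sketch with hypothetical file, database, and table names:

```sql
-- Run from a client connected to the remote server, e.g.:
--   mysql --local-infile=1 -h remote-host -u user -p
LOAD DATA LOCAL INFILE '/home/me/data.csv'
INTO TABLE mydb.mytable
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;   -- skip a header row, if the CSV has one
```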

AWS performance of RDS with provisioned IOPS vs EC2

Posted: 25 Apr 2013 11:04 AM PDT

Has anyone done a performance comparison of AWS RDS with the new provisioned IOPS vs EC2? I've found plenty of non-high IOPS RDS vs EC2 but nothing with the new high IOPS feature in RDS.

sp_startpublication_snapshot Parameter(s)

Posted: 25 Apr 2013 03:04 PM PDT

I am creating a stored procedure that:

  1. Restores a DB from a .bak, giving the .mdf and .ldf new names (so we can have several copies of the same DB up)
  2. (If specified in the SP's parameter) Creates three merge replication publications
  3. (What I need help doing) Generating the snapshots for the three publications using sp_startpublication_snapshot

Here is my new brick wall: on this DB server, I have a 'shell' DB that they will be running the SP from, which has a history table so I can keep track of who created/deleted databases using my SPs. The only parameter for sp_startpublication_snapshot is @publication. I can give it the publication name, but since I am not running it from the publishing database, how do I specify the publishing database?

i.e.: the publication shows up as:

[WC48_Database1]: upb_Inspection_PrimaryArticles  

but I am running the script from the database [WC_QACatalog]

Any ideas about how to accomplish this?

Thank you, Wes

query processor ran out of internal resources and could not produce a query plan

Posted: 25 Apr 2013 02:11 PM PDT

This is showing up in the logs several times a night. How do I find the query causing the issue? SQL Server 2008 R2 SP1.

Thank you

Binlog has bad magic number

Posted: 25 Apr 2013 09:04 PM PDT

I keep getting this error whenever I start MySQL.

121028  1:38:55 [Note] Plugin 'FEDERATED' is disabled.
121028  1:38:55 InnoDB: The InnoDB memory heap is disabled
121028  1:38:55 InnoDB: Mutexes and rw_locks use Windows interlocked functions
121028  1:38:56 InnoDB: Compressed tables use zlib 1.2.3
121028  1:38:56 InnoDB: Initializing buffer pool, size = 16.0M
121028  1:38:56 InnoDB: Completed initialization of buffer pool
121028  1:38:56 InnoDB: highest supported file format is Barracuda.
121028  1:38:57  InnoDB: Waiting for the background threads to start
121028  1:38:58 InnoDB: 1.1.8 started; log sequence number 3137114
121028  1:38:58 [ERROR] Binlog has bad magic number;  It's not a binary log file that can be used by this version of MySQL
121028  1:38:58 [ERROR] Can't init tc log
121028  1:38:58 [ERROR] Aborting

121028  1:38:58  InnoDB: Starting shutdown...
121028  1:38:58  InnoDB: Shutdown completed; log sequence number 3137114
121028  1:38:58 [Note] C:\PROGRA~2\EASYPH~1.1\MySql\bin\mysqld.exe: Shutdown complete

I have already tried this.

I have an EasyPHP 12.1 setup on Windows 7x64 PC.

What is the recommended way to backup a MySQL/Amazon RDS database to S3?

Posted: 25 Apr 2013 02:47 PM PDT

I have two purposes for this:

  1. To have an offsite backup in case of a region wide problem with Amazon Web Services.
  2. To copy production data from a production billing account to a beta billing account.

Currently it does not appear that Amazon supports either of these two use cases out of the box.

I have seen mysqldump and xtrabackup (see forum post) mentioned.

I have also seen a more complicated process (documented here):

  1. A new RDS server in the source billing account (prod) is spun up off a recent backup.
  2. A new EC2 instance is spun up that has access to the RDS server in step 1.
  3. mysqldump is used to make a backup of this database.
  4. Backup is copied to an offsite location (S3?).
  5. In a separate account and/or region, a fresh RDS server is spun up.
  6. Database dump is imported.

Tips and suggestions are appreciated.
