Tuesday, March 26, 2013

[how to] Update oracle sql database from CSV

Update oracle sql database from CSV

Posted: 26 Mar 2013 08:43 PM PDT

I tried Google first, but no luck. Is it possible to update tables from a CSV file? I am using SQL Developer, and I use a script to export edited rows to a CSV file. I would like to apply those edited rows from the CSV file on a client machine. I don't want to import the whole file, as the client already has a mirror table; I just want to update the data to match what is in the CSV file. Is this possible?

If not, what would be the best approach?
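One common approach, sketched below under assumptions (all object names are hypothetical): expose the CSV to Oracle as an external table (or load it into a staging table with SQL*Loader or SQL Developer's import), then MERGE it into the target so only matching rows are updated.

CREATE TABLE clients_csv_ext (
  client_id   NUMBER,
  client_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY csv_dir              -- directory object pointing at the CSV's folder
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('edited_rows.csv')
);

-- Update only the rows present in the CSV; no full import needed
MERGE INTO clients t
USING clients_csv_ext s
   ON (t.client_id = s.client_id)
 WHEN MATCHED THEN
   UPDATE SET t.client_name = s.client_name;

Since the external table reads a file visible to the database server (via the directory object), this suits server-side refreshes; for a purely client-side file, SQL*Loader into a staging table plus the same MERGE achieves the equivalent.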

SQL Server Restore from one database to another

Posted: 26 Mar 2013 05:58 PM PDT

One of our devs backed up a dev database and then restored it to production. It's a new database for a new app that was intentionally deployed to prod last night.

Now, in the backupset table (msdb.dbo.backupset) on prod, I can see a record for the dev database, with a backup start date/time matching when the restore was done.

Record from the prod backupset table:

name: DatabaseName_UAT-Full Database Backup

server_name: COMPNAME-SQLDEV02

machine_name: COMPNAME-SQLDEV02

I would not expect to see this record. Can anyone explain why a restore would insert into the backupset table on prod?

Can I delete this record from the msdb.dbo.backupset table, or is that not such a good idea?

Thanks heaps.
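Restoring a backup does record the restored backup set in the destination server's msdb, so the row itself is expected. If you want it gone, the supported route is the msdb history procedures rather than a direct DELETE, since backupset has companion rows in backupmediaset, backupmediafamily and restorehistory. A sketch (database name taken from the example above); note it removes all backup and restore history for that database on this server:

EXEC msdb.dbo.sp_delete_database_backuphistory
     @database_name = N'DatabaseName_UAT';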

MongoDB: move documents before capping

Posted: 26 Mar 2013 03:51 PM PDT

The cappedCollection concept works well for most of my projects, where discarding old data without further care makes sense.

For other projects, I need a more complex and safer concept. The requirement is nearly the same as logrotate's: data is appended to the main collection, without compression/compaction, and with no index except a timestamp for simple queries by time. The focus is on writing and on keeping the data persistent.

Similar to logrotate rules, I'd like the main collection not to become too large, so it should be capped by size; if capping by timestamp is possible, that would be a plus.

This sounds like a cappedCollection, but I do not want any data loss when it is capped. The old data should be moved into a collection in another database, which must be compact:true and non-capped. Its name depends on the current month, which ensures there are at most 12 "archive" collections per year.

Example:

liveDB.mainCollection_capped grows and starts capping.

Before old documents are removed, they are safely moved into archiveDB.compactArchiveCollection201303.

No data is lost, and the main collection remains small and fast. Storing the data in another database avoids db locks; e.g., repairDatabase tasks on an archive file will not affect or delay the main collection.

Is there a good practice for achieving this, as reliably and automatically as possible, without hand-writing the whole data transfer as a cron job? Such a job must never be missed, because data is lost if capping starts before the old data has been copied into the archive.

How can I improve my table design for different types of an entity?

Posted: 26 Mar 2013 05:30 PM PDT

Consider an accounting system as an example. I have an entity called Client. Clients can be of different types, with different fields applicable to each type. I am considering creating separate tables for the different client types, each holding the fields applicable to its type, plus one master table referencing all of them and holding the fields applicable to all types.

Currently, I have come up with the following design:

[diagram of the proposed design]

But I don't think my design is efficient enough (or even correct and free of errors). What would you suggest? Also, in case it matters, I am planning to use MariaDB.
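One pattern that fits this (often called class table inheritance) is a supertype table holding the shared fields plus one subtype table per client type, sharing the supertype's key. A minimal sketch in MariaDB/MySQL DDL, with hypothetical types 'person' and 'company':

CREATE TABLE client (
  client_id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  client_type ENUM('person','company') NOT NULL,   -- discriminator
  name        VARCHAR(100) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE client_person (
  client_id  INT UNSIGNED NOT NULL PRIMARY KEY,    -- same key as client
  birth_date DATE,
  FOREIGN KEY (client_id) REFERENCES client (client_id)
) ENGINE=InnoDB;

CREATE TABLE client_company (
  client_id  INT UNSIGNED NOT NULL PRIMARY KEY,
  tax_number VARCHAR(20),
  FOREIGN KEY (client_id) REFERENCES client (client_id)
) ENGINE=InnoDB;

Queries that only need common fields touch client alone; type-specific queries join exactly one subtype table.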

FOR XML is generating an empty first line

Posted: 26 Mar 2013 07:09 PM PDT

I'm using Flash to parse an XML file generated by this code:

:XML ON
USE MyDatabaseName
GO
SET NOCOUNT ON

SELECT * FROM ProgramacionDia as programa order by hora
FOR XML AUTO, ROOT ('toflash'), ELEMENTS

SET NOCOUNT OFF

But I get an XML file whose first line is empty. If I remove that first empty line, the XML works fine with Flash; as generated, it does not.

How can I remove that line? Is my script wrong? I don't know much about this code.

I'm running SQL Server 9.0.
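If the file is produced with sqlcmd (the :XML ON suggests it is), one thing worth trying, following the pattern in the :XML ON documentation, is to finish every non-XML batch before switching XML mode on, so only the FOR XML batch runs under :XML ON:

USE MyDatabaseName
GO
SET NOCOUNT ON
GO
:XML ON
SELECT * FROM ProgramacionDia AS programa ORDER BY hora
FOR XML AUTO, ROOT ('toflash'), ELEMENTS
GO
:XML OFF

This keeps informational messages and row-count output out of the XML stream, which is a plausible source of the stray first line.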

Identifying which values do NOT match a table row

Posted: 26 Mar 2013 04:26 PM PDT

I would like to be able to easily check which unique identifiers do not exist in a table, of those supplied in a query.

To better explain, here's what I would do now, to check which IDs of the list "1, 2, 3, 4" do not exist in a table:

  1. SELECT * FROM dbo."TABLE" WHERE "ID" IN ('1','2','3','4'), let's say the table contains no row with ID 2.
  2. Dump the results into Excel
  3. Run a VLOOKUP on the original list that searches for each list value in the result list.
  4. Any VLOOKUP that results in an #N/A is on a value that did not occur in the table.

I'm thinking there's got to be a better way to do this. I'm looking, ideally, for something like

List to check -> Query on table to check -> Members of list not in table
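One way to get exactly that, assuming a DBMS with a VALUES row constructor in FROM (SQL Server 2008+ and PostgreSQL both qualify), is to treat the ID list as a derived table and anti-join it against the real one:

SELECT ids.id
FROM (VALUES ('1'), ('2'), ('3'), ('4')) AS ids(id)
LEFT JOIN dbo."TABLE" t ON t."ID" = ids.id
WHERE t."ID" IS NULL;

With the example data (no row with ID 2), this returns just '2', replacing steps 2-4 above.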

Differences Between Two Different Create Index Commands

Posted: 26 Mar 2013 01:43 PM PDT

Are there differences between these two scripts? Or are all of the extra tokens/attributes (i.e. NONCLUSTERED, WITH ..., etc.) in the 1st script simply SQL Server 2008 defaults applied to the 2nd script?

1st script:

CREATE UNIQUE NONCLUSTERED INDEX [DEID_MAP_IDX1] ON [dbo].[DEID_MAP]
(
    [VISIT_NUM] ASC
) WITH (PAD_INDEX = OFF,
        STATISTICS_NORECOMPUTE = OFF,
        IGNORE_DUP_KEY = OFF,
        ALLOW_ROW_LOCKS = ON,
        ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

2nd script:

CREATE UNIQUE INDEX [DEID_MAP_IDX1] ON [DEID_MAP]
(
    [VISIT_NUM] ASC
);

FYI: there is ETL code that drops the index with the script below before doing a bulk data load, and then re-creates the index with the 2nd script above.

DROP INDEX [deid_map_idx1] ON [deid_map] WITH ( ONLINE = OFF );  

EDIT:

After applying the simple index above (2nd script), I scripted it back out and got the following (SQL Server Management Studio > expand table > expand the "Indexes" folder > right-click the index > "Script Index as" > "CREATE To" > "New Query Editor Window"):

CREATE UNIQUE NONCLUSTERED INDEX [DEID_MAP_IDX1] ON [dbo].[DEID_MAP]
(
    [VISIT_NUM] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO

So it appears that these options are added automatically when running the simple statement:

SORT_IN_TEMPDB = OFF

DROP_EXISTING = OFF

ONLINE = OFF
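To confirm the two scripts end up equivalent, you can compare the persisted index properties directly; SORT_IN_TEMPDB, DROP_EXISTING and ONLINE only control how the build runs, not the resulting index. A sketch against the catalog views (table and index names from above):

SELECT i.name, i.type_desc, i.is_unique, i.is_padded,
       i.ignore_dup_key, i.allow_row_locks, i.allow_page_locks,
       s.no_recompute                      -- STATISTICS_NORECOMPUTE
FROM sys.indexes AS i
JOIN sys.stats   AS s
  ON s.object_id = i.object_id AND s.stats_id = i.index_id
WHERE i.object_id = OBJECT_ID(N'dbo.DEID_MAP')
  AND i.name = N'DEID_MAP_IDX1';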

MySQL Pivot Table Naming Conventions

Posted: 26 Mar 2013 01:03 PM PDT

I have some tables (users, jobs, customers) that are each tied to 'groups'. Since each table is linked to 'groups', I feel inclined to call the actual group tables user_groups, job_groups and customer_groups; however, if the tables were just 'users' and 'groups', my pivot table would normally be called 'user_groups'.

How would you name these tables?

  1. Users, user_groups and ...
  2. Jobs, job_groups and ...
  3. Customers, customer_groups and ...

If I end up with something like user_groups_pivot, would it be acceptable to call other pivot tables something like: users -> user_roles (the pivot) -> roles rather than users -> user_roles_pivot -> roles?

I'm very fond of the idea of names being predictable.

SQL Server TDE Stuck at Encryption_State 2 and 0% on a Tiny DB

Posted: 26 Mar 2013 03:12 PM PDT

I'm having a heck of a time with this issue and can't figure out what is wrong. I'm not sure how long this tiny DB has been in 'encryption_state = 2', per the query:

SELECT * FROM sys.dm_database_encryption_keys  

but it won't budge past 0%. State 2 means the DB is currently being encrypted. No other DB, encrypted or unencrypted, has any issues.

Command:

ALTER DATABASE x SET ENCRYPTION OFF  

Result:

Msg 33109, Level 16, State 1, Line 2
Cannot disable database encryption while an encryption, decryption, or key change scan is in progress.
Msg 5069, Level 16, State 1, Line 2
ALTER DATABASE statement failed.

Running:

ALTER DATABASE x SET ENCRYPTION ON  

actually reports that the command completed, but the percentage still stays at 0%.

There's no DB corruption per DBCC CHECKDB. There's no locking/blocking going on (not that logical blocking would affect TDE, since it works at the page level). I'm at a loss, short of calling MS about this. Anyone have any ideas? I'm going to restore this to a different DB server and test decrypting the backup there.

Thanks.

SELECT DB_NAME(database_id), percent_complete, *
FROM sys.dm_database_encryption_keys
GO

Edit & update:

Restored to a different server with the cert: percent_complete climbs to 1.187865%, then immediately reverts to 0% in about one second. Firing up Profiler now to catch something in the background; also checking whether Extended Events would help.

Oh boy, Profiler shows error 824, suspect DB page. DBCC CHECKDB consistently shows no errors. Time to brush up on my CHECKDB internals; I know Paul Randal blogs about error 824. I will update this for others who might hit the same issue.

Optimize Query Execution based on two datetime columns (MySQL)

Posted: 26 Mar 2013 12:22 PM PDT

I've been struggling all day long against this.

So I've got a very busy database (still in development) with records being inserted very frequently. The log records have a start time and an end time.

So if I want to select something between col1 (datetime) and col2 (datetime), MySQL can't use indexes properly, because it will search the index on col1 but never look into col2. The storage engine is InnoDB. What happens, for example, is that MySQL examines 80 thousand rows when the requested interval should only return two.

My biggest problem is that I'm trying to run aggregate functions over these time ranges, and it takes a very long time when it should be really fast considering how few rows it actually counts.

Also note that I can't use dateStart BETWEEN col1 AND col2, nor dateEnd BETWEEN col1 AND col2, because dateStart can be lower than col1 and dateEnd can also be lower than col2.

Let's assume this sample data:

     col1      |     col2
---------------+---------------
date 10:20:00  | date 10:21:00
date 10:21:00  | date 10:22:00
date 10:22:00  | date 10:23:00
date 10:23:00  | date 10:24:00
date 10:24:00  | date 10:25:00
date 10:25:00  | date 10:26:00
date 10:26:00  | date 10:27:00

If I need the rows whose ranges overlap 10:21:30 to 10:25:30, I need something like: col1 <= '10:25:30' AND col2 >= '10:21:30'. So how do I index these columns properly? MySQL only picks up one of the date columns from the index.

Thanks in advance.
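There is no perfect B-tree answer for open-ended overlap predicates, but if the ranges have a known maximum duration, a common workaround is to close the range on col1 so a composite index can be used. A sketch, assuming a hypothetical table log and that no row spans more than one hour:

ALTER TABLE log ADD INDEX idx_col1_col2 (col1, col2);

SELECT *
FROM log
WHERE col1 <= '2013-03-26 10:25:30'
  AND col1 >= '2013-03-26 10:21:30' - INTERVAL 1 HOUR   -- bounds the scan; valid only if no range exceeds 1h
  AND col2 >= '2013-03-26 10:21:30';

The extra lower bound on col1 lets the index scan start near the interval instead of at the beginning of the table.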

Invalid rowid error

Posted: 26 Mar 2013 01:01 PM PDT

I'm trying to see how an UPDATE lock helps minimize errors during DML (delete/update) operations.

declare
  cursor update_lock is select empno from emp where deptno=&no for update of sal;
  num number;
begin
  --for i in update_lock loop
  --end loop;
  open update_lock;
  loop
    fetch update_lock into num;
    exit when update_lock%notfound;
    dbms_output.put_line(num);
  end loop;
  update emp set sal=sal+10 where current of update_lock;
  close update_lock;
end;

I'm using this very simple code to check how it works, but it throws 'invalid ROWID'. Can anyone help me?
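The 'invalid ROWID' most likely comes from running the UPDATE after the loop has exhausted the cursor: once %NOTFOUND is true, the cursor is no longer positioned on a row, and WHERE CURRENT OF only targets the most recently fetched row. A sketch of the same block with the UPDATE moved inside the loop:

declare
  cursor update_lock is
    select empno from emp where deptno = &no for update of sal;
  num number;
begin
  open update_lock;
  loop
    fetch update_lock into num;
    exit when update_lock%notfound;
    dbms_output.put_line(num);
    -- update the row just fetched, while the cursor is still positioned on it
    update emp set sal = sal + 10 where current of update_lock;
  end loop;
  close update_lock;
end;
/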

Postgres RIGHT JOIN with custom array

Posted: 26 Mar 2013 05:46 PM PDT

I'm using Postgres 9.1 and want to get a result with blanks where there is no data. My query looks like the following:

SELECT institution_id FROM ... WHERE institution_id IN (1, 3, 4, 5, 7, 9)  

The ... is not important to this question; what matters is that the query returns a result restricted to the institution_ids in the list (1, 3, 4, 5, 7, 9), and that it should include those institutions with no data. Here is an example of the current output:

days    treatments    institution_id
266     6996          4
265     5310          1
267     3361          5
260     2809          3
264     5249          7

An example of the output I want is:

days    treatments    institution_id
266     6996          4
265     5310          1
267     3361          5
260     2809          3
264     5249          7
                      9

I know I can achieve this by using the following query:

SELECT *
FROM (
       SELECT institution_id
       FROM ...
       WHERE institution_id IN (1, 3, 4, 5, 7, 9)
     ) AS t
RIGHT JOIN generate_series(1,9) ON generate_series = institution_id
WHERE generate_series IN (1, 3, 4, 5, 7, 9)

However, this is extra work: generate_series(1,9) creates institution_ids I'm not interested in, it requires that I know the max institution_id a priori, and it introduces an unnecessary WHERE clause. Ideally I'd like a query like the following:

SELECT *
FROM (
       SELECT institution_id
       FROM ...
       WHERE institution_id IN (1, 3, 4, 5, 7, 9)
     ) AS t
RIGHT JOIN (1, 3, 4, 5, 7, 9) ON generate_series = institution_id

where (1, 3, 4, 5, 7, 9) is just an array that Postgres will use for the JOIN. I've also tried [1, 3, 4, 5, 7, 9] and {1, 3, 4, 5, 7, 9}, both to no avail.

Any ideas?
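PostgreSQL can turn an array literal into a row set with unnest(), which joins like a table and removes both the generate_series and the extra WHERE. A sketch (the ... stands for the original query, as above):

SELECT t.days, t.treatments, ids.institution_id
FROM unnest(ARRAY[1, 3, 4, 5, 7, 9]) AS ids(institution_id)
LEFT JOIN (
       SELECT days, treatments, institution_id
       FROM ...
       WHERE institution_id IN (1, 3, 4, 5, 7, 9)
     ) AS t ON t.institution_id = ids.institution_id;

Driving from the array with a LEFT JOIN is equivalent to the RIGHT JOIN formulation and keeps institution 9 in the output with blanks.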

How can I tell if a SQL Server backup is compressed?

Posted: 26 Mar 2013 10:52 AM PDT

We have recently upgraded from SQL Server 2005 to SQL Server 2012. Under SQL Server 2005 there was no option to create compressed backups as there is in 2012.

If you attempt BACKUP DATABASE ... WITH COMPRESSION; to a file that has already been initialized without compression, the BACKUP DATABASE command fails with the following error:

ERROR MESSAGE : BACKUP DATABASE is terminating abnormally.
ERROR CODE : 3013

How can I tell if an existing backup file is initialized for compressed backups?
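RESTORE HEADERONLY reports this directly: its result set includes a Compressed column (1 for a compressed backup set, 0 otherwise). A sketch with a placeholder path:

RESTORE HEADERONLY
FROM DISK = N'X:\Backups\MyDatabase.bak';
-- Inspect the Compressed column of the output: 1 = compressed, 0 = not.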

Run Multiple Remote Jobs

Posted: 26 Mar 2013 02:04 PM PDT

I need to manually run a job on more than 150 remote SQL Server 2000 instances from a local SQL Server 2005 instance. The job is the same on all of these instances: it just calls a parameterless stored procedure, which is also identical across the instances. These jobs are on a schedule, but now I am asked to run the job manually on all instances, or on specified instances, upon request.

What is the best practice for this? I have tried OPENROWSET to call the remote stored procedure, but each run of the job takes a couple of minutes, so if I loop over all the jobs they run one after another, which takes a long time. Ideally, it should run the stored procedure on each instance without waiting for it to finish. Even better, it should run the job on each instance without waiting for it to finish, so it leaves a record in the job history on each instance.

Also, the stored procedure is from a third party, so it can't be altered.
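One option that matches the "fire and forget, keep remote job history" requirement: msdb.dbo.sp_start_job returns as soon as the Agent job is started, not when it finishes, so calling it through a linked server per instance lets the runs overlap and leaves history on each remote box. A sketch, assuming linked servers with RPC Out enabled and a hypothetical job name:

EXEC [REMOTE01].msdb.dbo.sp_start_job @job_name = N'MyThirdPartyJob';
EXEC [REMOTE02].msdb.dbo.sp_start_job @job_name = N'MyThirdPartyJob';
-- ...one call per instance; each returns immediately after starting the job

For 150+ instances, the EXEC statements could be generated from a table of server names rather than written by hand.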

Stored Procedures under Source Control, best practice

Posted: 26 Mar 2013 04:13 PM PDT

I am currently using TortoiseSVN to source-control a .NET web application. What would be the best way to bring our SQL Server stored procedures into source control? I am currently using VS 2010 as my development environment and connecting to an off-premises SQL Server 2008 R2 database using SQL Server Data Tools (SSDT).

What I have done in the past is save the procs to .sql files and keep those files under source control. I'm sure there must be a more efficient way than this. Is there an extension I can install in VS2010 or SSDT, or even in SQL Server on the production machine?

Secure Linked Server - Non-privileged user possible? Registry corruption?

Posted: 26 Mar 2013 04:39 PM PDT

Is it possible for a non-privileged Windows domain account to impersonate itself across a linked server?

And why would it be unable to read the registry for available network protocols?

Overview: the only way I am able to have a scheduled job utilize a linked server is when the local account is mapped to a remote SQL account. I am unable to use 'Impersonate'.

Details:

  • Two SQL 2008 R2 Std instances on Win Server 2008 R2 x64
  • One default + one named
  • I'll use Server_A_Default + Server_A_Named to refer to the instances
  • Each instance has its own AD service account for MSSQL + Agent (4 unique AD accounts in use on the server)
  • Port hard-coded for the named instance Server_A_Named
  • SPNs created for the 2 MSSQL accounts
  • The SPNs match the default port and the hard-coded named-instance port, respectively

Within the named instance (Server_A_Named):

  • Created a linked server on Server_A_Named to Server_B. We'll call the linked server SAN-B.

In SAN-B, I've used SQL Native Client 10.0 + the OLE DB Provider for SQL Server.

Under the Security for SAN-B, I have 3 accounts:

  • NonPrivADuser
  • ADuserSysAdmin
  • LocalSQLuser

For logins not defined, connections will not be made.

As ADuserSysAdmin, I can click on test connection and it works.

The only way to get the linked server to work for NonPrivADuser is to map it to a local SQL account on Server_B. NonPrivADuser has access to Server_B's database as well.

This is the error that NonPrivADuser receives while trying to access the linked server using 'impersonate':

Executed as user: DOMAIN\NonPrivADuser. SQL Server Network Interfaces: Error getting enabled protocols list from registry [xFFFFFFFF]. [SQLSTATE 42000] (Error 65535) OLE DB provider "SQLNCLI10" for linked server "SAN-B" returned message "Login timeout expired". [SQLSTATE 01000] (Error 7412) OLE DB provider "SQLNCLI10" for linked server "SAN-B" returned message "A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.". [SQLSTATE 01000] (Error 7412). The step failed.

I fired up procmon on Server_A while trying to use the linked server SAN-B.

SQLAGENT.EXE can read HKLM\SOFTWARE\Microsoft\MSSQLSERVER\Client\SNI10.0; SQLSERVR.EXE receives a BAD IMPERSONATION on the same key.

I fired up regedit, and 'Users' has read permission on that key.

Separate SQL Server data from schema

Posted: 26 Mar 2013 05:37 PM PDT

I am facing a kind of strange request:

We have application installations worldwide. One of the countries where we wish to do business has strict laws regarding the handling of data, such that it would be advantageous for us to store the data within the boundaries of that country.

So far, nothing weird, right? We will have an instance of our SQL Server database hosted within the borders of said country.

Here's the part that is outside my knowledge: someone in management heard that some other firms do this by keeping the schema and indexes in a local location, but storing the data, in encrypted form, in the other country.

The only thing I could think of that might support this would be to put the tables containing sensitive data in a separate file or filegroup from the rest of the database. In this case, however, there would be an ocean between the file and the server! I can't imagine we would get good performance from that sort of arrangement.

Is there anyone out there who has had experience with this sort of request? What technologies can I look at to accomplish this?

Booking system structure

Posted: 26 Mar 2013 09:31 AM PDT

I am making a room booking system for a friend's business.

Background: 3 rooms, multiple clients, bookings available 9-5, bookings last 1 hour.

For the database, is this too simple?

Booking record table

Reference | Client ID | Room ID | Timestamp

Client table

Client ID | Name | Phone | Email

Room table

Room ID | Name | Sink | Window | .....

Thanks for your help

Nathan.
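The structure looks sufficient for the stated rules; the one thing it leaves unenforced is double booking. A minimal sketch of the booking table with a uniqueness guard, in MySQL-style DDL with assumed lowercase table names:

CREATE TABLE booking (
  reference  INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  client_id  INT NOT NULL,
  room_id    INT NOT NULL,
  slot_start DATETIME NOT NULL,                    -- whole hours between 9 and 5
  UNIQUE KEY uq_room_slot (room_id, slot_start),   -- one booking per room per hour
  FOREIGN KEY (client_id) REFERENCES client (client_id),
  FOREIGN KEY (room_id)   REFERENCES room (room_id)
) ENGINE=InnoDB;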

Reindexing and its effects on SQL cache

Posted: 26 Mar 2013 01:37 PM PDT

I am currently looking into reindexing our SQL Server database and cannot find any information on whether the plan cache would be affected. Any help or information would be great. We are using SQL Server 2005 as our DBMS.
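For what it's worth: in SQL Server 2005 an index rebuild also updates that index's statistics, and statistics changes mark dependent cached plans for recompile on next use, so some plan cache churn after reindexing is expected. To observe it, a sketch that snapshots cached plans referencing a given table (the table name is a placeholder) before and after maintenance:

SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%YourTableName%';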

SQL Server 2008 search joining query

Posted: 26 Mar 2013 09:03 PM PDT

I have two tables: one is the message table and the other is the messageUser table. I need to run a check before inserting a new row.

For example:

M_Message
MessageId
=========
        1
        2
        3

M_MessageUser
MessageId | MemberId | BusinessId
==========|==========|===========
        1 |        1 |          0
        1 |        0 |          2
        2 |        1 |          0
        2 |        0 |          2
        2 |        3 |          0
        2 |        4 |          0
        3 |        1 |          0
        3 |        0 |          2
        3 |        0 |          4

When a member creates a new message, I would like to check whether a message with exactly these users already exists. If yes, attach the message to the previous conversation; otherwise, create a new conversation.

Scene 1
Member 1 sends a message to Business 2; from the table we know there is a previous conversation, Message 1.

Scene 2
Member 1 sends a message to Business 2 & Member 3; from the table we know there is no previous conversation.

I've tried checking with UNION and IN, but I basically just get the whole list back. Can anyone give me a hand? Thanks.

UPDATE

I can solve Scene 1 with the query below, but it fails for Scene 2:

SELECT MessageId FROM M_MessageUser
WHERE (MemberId IN (0,1) AND BusinessId IN (0,2))
GROUP BY MessageId
EXCEPT
SELECT MessageId FROM M_MessageUser
WHERE (MemberId NOT IN (0,1) OR BusinessId NOT IN (0,2))
GROUP BY MessageId;
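One way to express "a message whose participant set is exactly this list" is relational division: count the rows per MessageId and require every requested participant to be present and nothing else. A sketch for Scene 2 (Member 1, Member 3, Business 2):

SELECT mu.MessageId
FROM M_MessageUser AS mu
GROUP BY mu.MessageId
HAVING COUNT(*) = 3                                        -- exactly three participant rows
   AND SUM(CASE WHEN mu.MemberId   = 1 THEN 1 ELSE 0 END) = 1
   AND SUM(CASE WHEN mu.MemberId   = 3 THEN 1 ELSE 0 END) = 1
   AND SUM(CASE WHEN mu.BusinessId = 2 THEN 1 ELSE 0 END) = 1;

An empty result means no previous conversation exists and a new one should be created; for Scene 1, use COUNT(*) = 2 with the Member 1 and Business 2 conditions, which returns MessageId 1 against the sample data.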

Unique index on 2 columns in mysql

Posted: 26 Mar 2013 03:49 PM PDT

I have a table in MySQL named 'UserFriends' where I keep my website users' friend details.

Here is the schema of the table (UserFriends):

id int,
Userid int,
friendid int,
createdate timestamp

Now I want to create a unique index on userid & friendid, which I have done. Right now I cannot insert a duplicate (userid, friendid) pair. But if I insert the same values with the two columns swapped, the row is accepted without any error.

Example:

insert into userfriends ( userid, friendid )
select 1, 2
-- inserts fine

insert into userfriends ( userid, friendid )
select 1, 2
-- fails, because the unique index kicks in

Now I insert:

insert into userfriends ( userid, friendid )
select 2, 1
-- the row is inserted (I don't want this)

How do I prevent this?
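Since MySQL has no expression-based unique constraints, a common workaround is to normalize the pair so the smaller id always lands in userid, either in the application or with LEAST/GREATEST at insert time; the existing unique index then treats (2, 1) as the duplicate of (1, 2) that it is:

insert into userfriends ( userid, friendid )
select least(2, 1), greatest(2, 1);
-- stored as (1, 2), so the unique index rejects it as a duplicate

A BEFORE INSERT trigger that swaps the values the same way would enforce this even for code paths that forget the normalization.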

Loading a CSV file which is on the local system into a MySQL DB which is on a remote server

Posted: 26 Mar 2013 12:17 PM PDT

Can we directly load a CSV file that is on the local system into a MySQL DB that is installed on a remote server?

As far as I know, the plain 'LOAD DATA INFILE ... INTO TABLE' command can only load a file that resides on the server's own file system.
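MySQL covers exactly this case with the LOCAL modifier: LOAD DATA LOCAL INFILE reads the file on the client machine and streams it to the remote server, provided local_infile is permitted on both ends (e.g. connect with mysql --local-infile=1). A sketch with placeholder names:

LOAD DATA LOCAL INFILE '/path/on/client/data.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;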

Why would I NOT use the SQL Server option "optimize for ad hoc workloads"?

Posted: 26 Mar 2013 09:43 AM PDT

I've been reading some great articles regarding SQL Server plan caching by Kimberly Tripp, such as this one: http://www.sqlskills.com/blogs/kimberly/plan-cache-and-optimizing-for-adhoc-workloads/

Why is there even an option to "optimize for ad hoc workloads"? Shouldn't it always be on? Whether or not the developers use ad hoc SQL, why would you not enable this option on every instance that supports it (SQL 2008+), thereby reducing cache bloat?
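For reference, the setting is an instance-wide sp_configure option and takes effect without a restart:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;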

Avoiding performance hit from GROUP BY during FULLTEXT search?

Posted: 26 Mar 2013 08:51 AM PDT

Is there any clever way to avoid the performance hit of using GROUP BY during a fulltext search?

SELECT p.topic_id, min(p.post_id)
FROM forum_posts AS p
WHERE MATCH (p.post_text) AGAINST ('baby shoes' IN BOOLEAN MODE)
GROUP BY p.topic_id
LIMIT 20;

In this example it fetches the lowest post_id for each unique topic_id that matches the text.

With the GROUP BY to find the MIN, it takes 600 ms on a million-row database, with about 50K rows examined.

If I remove the MIN but leave the GROUP BY, it is just as slow, so the hit comes from the GROUP BY.

I suspect this is because it can only use one index, the fulltext one?

key: post_text | Using where; Using temporary; Using filesort

Query_time: 0.584685  Lock_time: 0.000137  Rows_sent: 20  Rows_examined: 57751
Full_scan: No  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: No
Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0

Without the GROUP BY it takes 1 ms, so this has to be filesort speed?

(I've removed ORDER BY and everything else to isolate where the hit is.)

Thanks for any insight and ideas.

(Using MyISAM under MariaDB, if it matters.)

AWS performance of RDS with provisioned IOPS vs EC2

Posted: 26 Mar 2013 10:22 AM PDT

Has anyone done a performance comparison of AWS RDS with the new provisioned IOPS vs. EC2? I've found plenty of comparisons of non-high-IOPS RDS vs. EC2, but nothing covering the new high-IOPS feature in RDS.

sp_startpublication_snapshot Parameter(s)

Posted: 26 Mar 2013 02:51 PM PDT

I am creating a stored procedure that:

  1. Restores a DB from a .bak, giving the .mdf and .ldf new names (so we can have several copies of the same DB up)
  2. (If specified in the SP's parameter) Creates three merge replication publications
  3. (What I need help with) Generates the snapshots for the three publications using sp_startpublication_snapshot

Here is my new brick wall: on this DB server, I have a 'shell' DB that the SP is run from; it has a history table so I can keep track of who created/deleted databases using my SPs. The only parameter for sp_startpublication_snapshot is @publication. I can give it the publication name, but since I am not running it from the publishing database, how do I specify the publishing database?

i.e., the publication shows up as:

[WC48_Database1]: upb_Inspection_PrimaryArticles  

but I am running the script from the database [WC_QACatalog].

Any ideas about how to accomplish this?

Thank you, Wes
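One approach worth testing: system procedures can be called with a three-part name, in which case they execute in the context of the database that qualifies them, so the shell DB could start the snapshot agent against the publishing database directly. A sketch with the names from the question (verify against your replication setup):

EXEC [WC48_Database1].sys.sp_startpublication_snapshot
     @publication = N'upb_Inspection_PrimaryArticles';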

Binlog has bad magic number

Posted: 26 Mar 2013 08:51 PM PDT

I keep getting this error whenever I start MySQL:

121028  1:38:55 [Note] Plugin 'FEDERATED' is disabled.
121028  1:38:55 InnoDB: The InnoDB memory heap is disabled
121028  1:38:55 InnoDB: Mutexes and rw_locks use Windows interlocked functions
121028  1:38:56 InnoDB: Compressed tables use zlib 1.2.3
121028  1:38:56 InnoDB: Initializing buffer pool, size = 16.0M
121028  1:38:56 InnoDB: Completed initialization of buffer pool
121028  1:38:56 InnoDB: highest supported file format is Barracuda.
121028  1:38:57  InnoDB: Waiting for the background threads to start
121028  1:38:58 InnoDB: 1.1.8 started; log sequence number 3137114
121028  1:38:58 [ERROR] Binlog has bad magic number;  It's not a binary log file that can be used by this version of MySQL
121028  1:38:58 [ERROR] Can't init tc log
121028  1:38:58 [ERROR] Aborting

121028  1:38:58  InnoDB: Starting shutdown...
121028  1:38:58  InnoDB: Shutdown completed; log sequence number 3137114
121028  1:38:58 [Note] C:\PROGRA~2\EASYPH~1.1\MySql\bin\mysqld.exe: Shutdown complete

I have already tried this.

I have an EasyPHP 12.1 setup on a Windows 7 x64 PC.
