Thursday, April 18, 2013

[how to] MySQL score by rank

MySQL score by rank

Posted: 18 Apr 2013 07:52 PM PDT

I am using MySQL to build a rating system for my database. What I want to do is rate each attribute by its percentage. Here is the example data:

ID, value
1, 3
2, 5
3, 2
4, 5

The output I want is:

ID, value, rank, score
1, 3, 2, 6.6
2, 5, 1, 10
3, 2, 3, 3.3
4, 5, 1, 10

The score is based on the rank, computed as:

10*(MAX(rank)-(rank))/(MAX(rank)-MIN(rank))  

I have the rank query done but am stuck on transforming ranks into scores. Here is the query I have so far:

SELECT `ID`, `value`,
       FIND_IN_SET(`value`, (
           SELECT GROUP_CONCAT(DISTINCT `value` ORDER BY `value` DESC)
           FROM table)
       ) AS rank
FROM table;
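One possible way to fold the formula in (a sketch, untested, assuming the table is named t) is to wrap the rank query in a derived table and CROSS JOIN the minimum and maximum ranks; since FIND_IN_SET over the DISTINCT list yields dense ranks 1..N, the min is always 1 and the max is COUNT(DISTINCT value):

SELECT r.`ID`, r.`value`, r.rank,
       10 * (m.max_rank - r.rank) / (m.max_rank - m.min_rank) AS score
FROM (
    SELECT `ID`, `value`,
           FIND_IN_SET(`value`, (
               SELECT GROUP_CONCAT(DISTINCT `value` ORDER BY `value` DESC)
               FROM t)) AS rank
    FROM t
) AS r
CROSS JOIN (
    SELECT 1 AS min_rank, COUNT(DISTINCT `value`) AS max_rank
    FROM t
) AS m;

Note that the formula as given yields 10, 5, 0 for ranks 1, 2, 3 rather than the 10, 6.6, 3.3 in the sample output, so the exact expression may need adjusting.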

Thank you all guys :)

MySQL hogging memory

Posted: 18 Apr 2013 07:26 PM PDT

An installation of MySQL 5.6.10 on a virtualized Ubuntu 12.04 is exhibiting massive memory hogging:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
14019 mysql     20   0 29.0g  17g 8600 S   54 76.7  20:42.64 mysqld

Usually I am able to free ~3 GB by issuing FLUSH TABLES. The tables used are almost exclusively InnoDB, and innodb_buffer_pool_size has been set to 10 GB (after setting it to 16 GB quickly depleted the available physical memory and swapped out more than 18 GB).

While the system was swapping, I observed rather high "swap out" counters (vmstat showed ~1k pages/second during bursts) and hardly anything swapped back in (a few dozen pages per minute). I first suspected a memory leak but have not found anything supporting this hypothesis so far.

What means do I have to identify the possible causes of this apparently unbounded growth?
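A few generic starting points (an editor's addition, not from the post) for comparing the configured memory ceiling against what top reports:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES WHERE Variable_name IN
    ('sort_buffer_size', 'join_buffer_size', 'read_buffer_size',
     'read_rnd_buffer_size', 'tmp_table_size', 'max_connections');
SHOW ENGINE INNODB STATUS;  -- see the "BUFFER POOL AND MEMORY" section

The per-connection buffers multiplied by max_connections, plus the buffer pool, give a rough upper bound on expected usage.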

Equivalent of MRG_MYISAM in databases other than MySQL?

Posted: 18 Apr 2013 05:57 PM PDT

Does anyone know if other databases have something equivalent to MRG_MYISAM (a.k.a. the MERGE table type/storage engine)?

I know about fragmenting, but this is not quite the same, AFAIK. We're using MRG_MYISAM to avoid large amounts of duplicate data across customer-specific databases, so MRG_MYISAM is perfect for us.

That said, I'd like to know if there are equivalent things in other DBs, particularly other open source DBs.
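For readers unfamiliar with the engine, a MERGE table presents several identically structured MyISAM tables as one (generic example; table names are illustrative):

CREATE TABLE log_2012 (id INT NOT NULL, msg VARCHAR(255)) ENGINE=MyISAM;
CREATE TABLE log_2013 (id INT NOT NULL, msg VARCHAR(255)) ENGINE=MyISAM;
CREATE TABLE log_all  (id INT NOT NULL, msg VARCHAR(255))
    ENGINE=MRG_MYISAM UNION=(log_2012, log_2013) INSERT_METHOD=LAST;

In many other databases the closest analogue is a view over UNION ALL or table partitioning, though neither is an exact match.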

MySQL: logging queries which would execute without using indexes

Posted: 18 Apr 2013 05:24 PM PDT

I am trying to use log_queries_not_using_indexes = 1 to find queries which are not executing optimally on a MySQL server. However, I find the resulting log file of rather limited value: apparently, queries are logged whenever the optimizer decides not to use an index for the WHERE clause, not only when no index matches the filtered columns at all.

So given a table with the following structure

CREATE TABLE `test` (
    `id_test` int(11) NOT NULL AUTO_INCREMENT,
    `some_text` varchar(255) DEFAULT NULL,
    `some_more_text` text,
    PRIMARY KEY (`id_test`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

a query like SELECT id_test FROM test WHERE id_test != 69 would be logged to the slow log for not using indexes (the optimizer decided a table scan is more efficient, since not much would be gained by using the index), but SELECT id_test FROM test WHERE id_test = 69 would not.

I would expect the latter behavior in the first case as well, since the index is present. As is, it makes troubleshooting missing indexes rather tiresome. Ideas on how to approach this are greatly appreciated.
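One knob that may cut the noise (an editor's suggestion, not from the post): min_examined_row_limit keeps queries that examine few rows out of the slow log, which filters out small-table scans like the example above:

SET GLOBAL log_queries_not_using_indexes = 1;
SET GLOBAL min_examined_row_limit = 1000;  -- skip scans of small tables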

mysql: need help to optimize my query/table

Posted: 18 Apr 2013 04:27 PM PDT

I'm wondering if someone could help me optimize my tables/query to speed up a query that is currently running ridiculously slowly. I think a well-thought-out index could help. Any help would be really appreciated.

The tables URLS and TAGS mentioned below have 2 million and 20 million rows respectively (and will probably end up with 10x that). A query like the one below already takes 10 seconds to run.

An Example: http://whatrethebest.com/php+tutorials

Tables

CREATE TABLE IF NOT EXISTS `TAGS` (
    `hash` varchar(255) NOT NULL,
    `tag` varchar(255) NOT NULL,
    UNIQUE KEY `my_unique_key` (`hash`,`tag`),
    KEY `tag` (`tag`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

and

CREATE TABLE IF NOT EXISTS `URLS` (
    `url` text NOT NULL,
    `domain` text,
    `title` text NOT NULL,
    `description` text,
    `numsaves` int(11) NOT NULL,
    `firstsaved` varchar(256) DEFAULT NULL,
    `md5` varchar(255) NOT NULL DEFAULT '',
    PRIMARY KEY (`md5`),
    UNIQUE KEY `md5` (`md5`),
    KEY `numsaves` (`numsaves`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

QUERY

SELECT urls.md5, urls.url, urls.title, urls.numsaves
FROM urls
JOIN tags ON urls.md5 = tags.hash
WHERE tags.tag IN ('php', 'tutorials')
GROUP BY urls.md5
HAVING COUNT(*) = 2
ORDER BY urls.numsaves DESC
LIMIT 20

EXPLAIN

I'm not sure what this shows

id  select_type  table  type    possible_keys      key      key_len  ref                                   rows    Extra
1   SIMPLE       tags   range   my_unique_key,tag  tag      767      NULL                                  230946  Using where; Using index; Using temporary; Using filesort
1   SIMPLE       urls   eq_ref  PRIMARY,md5        PRIMARY  767      jcooper_whatrethebest_urls.tags.hash  1

So I think the problem is:

Certain tags like 'php' have 34,000 entries, most of which have under 5 saves, but in order to get the 20 most saved it has to sort them all. Right?

I can't really add a 'numsaves' column to TAGS and index that, because the number will be changing up and down, and that wouldn't make sense. Is it possible to create a cross-table index between urls.numsaves and tags.tag, or a third table to use in my query somehow? Would this solve my problem? I know almost nothing about indexing.
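Cross-table indexes don't exist in MySQL, but a composite index covering both the filter column and the join column is the usual first step (an editor's sketch; the EXPLAIN in the edit below suggests an index of this shape, named tag_hash_UX, was indeed tried):

ALTER TABLE TAGS ADD UNIQUE KEY tag_hash_UX (tag, hash);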

Any help would be really appreciated!

Edit: trying ypercube's suggestions

I tried making the index, but I'm not sure if it finished; is there any way to tell for sure? Here is the EXPLAIN for your (very nice) query for php + tutorials:

id  select_type  table  type    possible_keys              key            key_len  ref                                       rows   Extra
1   SIMPLE       t1     ref     my_unique_key,tag_hash_UX  tag_hash_UX    767      const                                     64962  Using where; Using index; Using temporary; Using filesort
1   SIMPLE       t2     eq_ref  my_unique_key,tag_hash_UX  my_unique_key  1534     jcooper_whatrethebest_urls.t1.hash,const  1      Using where; Using index
1   SIMPLE       u      eq_ref  PRIMARY,md5                PRIMARY        767      jcooper_whatrethebest_urls.t2.hash        1      Using where

When I run your query in PHP or phpMyAdmin (I know, I know, gross, I'm new to this) it takes a long, long time, but when I run it with EXPLAIN in front it gives me the number of rows very quickly. What could this mean?

I will consider using an ID field. It's a good idea, but would it account for this much slowness? I didn't think it was necessary because the order of the rows doesn't matter, a lot of them will be deleted eventually, and they only need to be unique on the hash of the URL... but I could keep the hash for uniqueness and treat the rest as irrelevant.

I'm trying to disable xp_cmdshell and RPC Out; when I run the commands in Query Analyzer it shows they are disabled

Posted: 18 Apr 2013 04:43 PM PDT

I'm trying to disable xp_cmdshell and RPC Out, and when I run the commands in Query Analyzer it shows they are disabled.
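The commands in question are presumably the standard toggles (an editor's assumption; these are the documented statements for disabling xp_cmdshell and a linked server's RPC Out option):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
EXEC sp_serveroption 'LinkedServerName', 'rpc out', 'false';  -- LinkedServerName is a placeholder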

But after this I need to run a security scan, which produces the following report saying it is not disabled. Can anyone help me?

5 Microsoft SQL Server Database Link Crawling Command Execution

QID: 19824 Category: Database

CVE ID:

Vendor Reference

Bugtraq ID:

Service Modified: 02/20/2013

User Modified:

Edited: No
PCI Vuln: Yes
THREAT: Microsoft SQL Server is exposed to a remote command execution vulnerability.
Affected Versions: Microsoft SQL Server 2005, 2008, 2008 R2, 2012 are affected.
IMPACT: Successful exploitation could allow attackers to obtain sensitive information and execute arbitrary code.
SOLUTION: There are no solutions available at this time. Workaround: Disable RPC_Out and xp_cmdshell for this issue.
COMPLIANCE: Not Applicable
EXPLOITABILITY: There is no exploitability information for this vulnerability.
ASSOCIATED MALWARE: There is no malware information for this vulnerability.
RESULTS: C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Binn\sqlservr.exe Version is 2009.100.4000.0

Loading data in mysql using LOAD DATA INFILE, replication safe?

Posted: 18 Apr 2013 04:21 PM PDT

I am trying to load data into a MySQL database from a CSV file. I found that I can use the LOAD DATA INFILE command to do it, but as per the MySQL documentation it is not replication safe (see here).
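For reference, typical usage looks like this (generic example; the path, table name, and CSV layout are placeholders):

LOAD DATA INFILE '/tmp/data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;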

Is there a better way to do it than via the application?

MySQL Tables Require Daily Repairs - Server, Table or HD?

Posted: 18 Apr 2013 11:39 AM PDT

I've been experiencing a weird issue with one of my MySQL DBs. Every day, sometimes 2-3 times per day, I need to repair the tables. The MySQL DB has 25 tables with 5.6m rows in total.

The bigger ones are:

Table A - 599k rows / 867MB
Table B - 2.1m rows / 146MB
Table C - 2.2m rows / 520MB

It seems table C needs to be repaired pretty frequently; tables A and B not as much.

When a table needs to be repaired, I don't see it marked as crashed or in use, but through other tools I can see the data is not what it should be.

When I do repair the table, I'll see a message similar to:

[table c] repair info Wrong bytesec:  54-55-102 at 368251940; Skipped
[table c] repair warning Number of rows changed from 2127934 to 2127931

or

[table c] repair info Wrong bytesec:  171-30-101 at 341237312; Skipped
[table c] repair warning Number of rows changed from 1984585 to 1984582

I've tried making adjustments in my.cnf, but it made no difference.

The server is a cloud server running both MySQL and Apache. Plenty of space available on all HDs:

Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda2             99G   14G   80G  15% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/xvda1             97M   49M   44M  53% /boot
/dev/xvdc1            296G   25G  257G   9% /data

I'm not sure if this is a problem with the cloud HD, the server, or the tables themselves. The problem didn't start happening until about 2 months ago, and the size of the DB has only changed by 300-400MB since then.

Any idea what I should be looking at to verify where the problem might be?

Using MySQL v5.1.66 and MyISAM

Thanks in advance.

Best, Cent

Update "NULL" string to Actual NULL value

Posted: 18 Apr 2013 11:51 AM PDT

I have a table that contains NULL values, but the problem is that some of the values are actually the string "NULL" and not actual NULLs, so when you try something like

where date is null  

it will not return the rows where the column holds the "NULL" string.

What I need to do is run an update over the whole table that converts all "NULL" strings to the actual NULL value. The "NULL" strings occur throughout all columns of the table, so it is not just one column that needs to be updated. I am not sure how to approach this scenario. I'm thinking I might need a loop since I have many columns, but then again there might be a simple solution without one. What would be the best way to resolve this issue?
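One loop-free approach (an editor's sketch; it assumes MySQL and a table named t, so adjust for your platform) is to generate one UPDATE per column from the catalog and then execute the statements it prints:

SELECT CONCAT('UPDATE `t` SET `', column_name, '` = NULL WHERE `',
              column_name, '` = ''NULL'';') AS stmt
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND table_name = 't';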

Reinstall MySQL but keep database tables and data

Posted: 18 Apr 2013 10:37 AM PDT

Please help!

There are server issues and MySQL is no longer running on our server (Ubuntu). The service is not recognized and needs to be reinstalled. Unfortunately, the database has not been backed up for 48 hours, and that is a lot of information.

How do I reinstall MySQL AND keep all my database data? Please note: I can't access MySQL at all, neither via the mysql command line nor phpMyAdmin.

Thanks in advance and let me know if I am missing important details.

Need ideas about OPTIMIZE TABLE

Posted: 18 Apr 2013 10:38 AM PDT

I'm looking at a database with 10 tables that is fairly active, with changes every hour. On the first of each month, I purge some rows from 3 tables to remove outdated material and keep the size down. All of these tables show highlighted (red) 'Overhead' in phpMyAdmin.

Given these conditions, should the tables be OPTIMIZEd just after the purge? If not a good idea, why? (The purge occurs at the lowest-usage time of day.)
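For reference, the statement under discussion would run right after the purge, e.g. (table names taken from the sizing list below):

OPTIMIZE TABLE table1, table2, table3;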

Let's say the tables and current (mid-month) Data & Index sizes are:

          Rows     Deleted     Data      Index
table1   17,000     7000      4.3 MiB   1.2 MiB    Holds +300/day Transaction Info
table2    6,000     1000     25.5 MiB   231 KiB    Holds User Schedule Info
table3    1,800       30      297 KiB    43 KiB    Holds User Info

This question is in response to learning about indexes and efforts to reduce slow queries and learning about high counts for:

  1. Handler_read_rnd_next
  2. Handler_read_prev
  3. Created_tmp_disk_tables
  4. Opened_tables

In reading articles about these items, it seems to be a 'learned' science and requires testing.

Thanks for responding.

Delete shared memory segments left over by an Oracle instance

Posted: 18 Apr 2013 11:12 AM PDT

We're using Oracle Enterprise 11gR2 running on Solaris.

How can I delete/remove allocated shared memory segments using ipcrm?

I'm getting this error:

 ORA-01041: internal error. hostdef extension doesn't exist  

Pull Subscription: process cannot read file due to OS error 5

Posted: 18 Apr 2013 04:37 PM PDT

I am trying to migrate a working pull subscription for transactional replication from one subscribing server to a new one. The subscribing server is at another site and is connected via a VPN tunnel. The serverName is resolvable via the hosts file.

I am trying to capture the existing configuration precisely, but clearly am missing something.

The error is: The process could not read file '\\[server]\repldata\unc\[folder]\[folder]\[file].pre' due to OS error 5. I can RDP into the subscribing server with the distributor connection account and access the file on the UNC share.

Everyone has permissions to the UNC share and we haven't seen this problem with other subscriptions.

The distribution process account is the SQL Server Agent account, which I know is not best practice but matches the configuration of the existing working replication. I also temporarily tried using a (local) Windows account.

Again, we have tried to configure the subscribing server exactly as the working server. What are we missing? We never saw this error when setting up the previous subscription.

One note: the old subscription is still up and functioning, and uses the same accounts to connect to the distributor. I wonder if the "access denied" could be thrown due to a sharing conflict.

Reducing Log Impact During Re-Indexing

Posted: 18 Apr 2013 12:05 PM PDT

We use Ola Hallengren's maintenance solution and it's great.

Regardless of the method used for re-indexing, a major friction point with IT is the amount of log generated during the weekly re-indexing process. For a 1TB DB, upwards of 300 GB of log can be generated. This causes mirroring backlogs/delays and also causes Data Protection Manager to take a long time to sync up with its off-site DPM partner server (sometimes several days!). As we approach a time when we will have a second site on warm standby, we know this delay in having off-site backups available during the vulnerable period after index maintenance could be the Achilles' heel. We are considering a larger pipe between the sites for the availability group, but generating less burst log activity would be great.

To mitigate this we have done two things, with only minimal impact. First, we spread out the weekly re-indexing by introducing delays, purposely slowing a 3-hour process to about 8 hours. Secondly, "some" key tables are maintained by a process that runs hourly, resulting in just-in-time re-indexing.
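Another lever worth noting (an editor's addition, a general technique rather than anything from the post): REORGANIZE instead of REBUILD for moderately fragmented indexes. It works in many small transactions, so with frequent log backups the log space can be reused as it goes instead of ballooning in one burst:

ALTER INDEX ALL ON dbo.BigTable REORGANIZE;  -- dbo.BigTable is a placeholder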

For a large and active OLTP DB with some LOB data, what are the rules of thumb for re-indexing frequency, the % of the database affected, and the number of indexes that should be rebuilt less frequently? Is a weekly rebuild overkill?

Time to apply transaction logs: does it matter how many logs?

Posted: 18 Apr 2013 09:56 AM PDT

When restoring from a backup in SQL Server, the procedure is to restore the .bak file and then apply any .trn files since the last full backup.

Does it make a difference how many .trn files there are, if they cover the same transactions? I.e., is it faster or slower to restore one 1-hour .trn file vs. twelve 5-minute .trn logs?
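For concreteness, the two scenarios compare a chain like the following (names are placeholders):

RESTORE DATABASE MyDb FROM DISK = 'D:\backup\full.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'D:\backup\log_01.trn' WITH NORECOVERY;
-- ... ten more 5-minute logs ...
RESTORE LOG MyDb FROM DISK = 'D:\backup\log_12.trn' WITH RECOVERY;

versus the same full restore followed by a single 1-hour log restore.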

SQL pre-login handshake connection failure

Posted: 18 Apr 2013 09:56 AM PDT

I'm having an intermittent problem with one of my old SQL 2000 servers. Every once in a while it stops accepting logins. I fix the problem by bouncing sqlservr.exe, and then it works properly for a few days. For example, running this PowerShell:

$connectionstring = "Server=.;Integrated Security=SSPI;"
$sqlconnection = new-object 'System.Data.SqlClient.SqlConnection'
$sqlconnection.connectionstring = $connectionstring
$sqlconnection.open()

Produces the error:

Exception calling "Open" with "0" argument(s): "A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - The specified network name is no longer available.)"

Similarly, if I change the connection string to:

$connectionstring = "Server=\\.\pipe\MSSQL`$SMS3000\sql\query;Integrated Security=SSPI;"  

I get a similar error, except the end of the error says: (provider: Named Pipes Provider, error: 0 - The pipe has been ended.)

I checked C:\Program Files\Microsoft SQL Server\80\Tools\Binn\SVRNETCN.exe: both Named Pipes and TCP/IP are enabled and "Force protocol encryption" is unchecked. And as I mentioned, if I bounce sqlservr.exe then the above commands create a successful connection.

I've checked the certificate store and there are no expired certificates in there. Checking the Windows event logs and SQL Server logs I'm not finding anything remotely useful there.

Lastly, I've run some network traces. For a failed connection I show:

TLS:TLS Rec Layer-1 HandShake: Client Hello.
TCP:Flags=...A.R.., ScrPort=1433, DstPort=18721, PayloadLen=0, Seq=888695317, Ack=3640041213, Win=0 ...

A successful connection looks like:

TLS:TLS Rec Layer-1 HandShake: Client Hello.
TLS:TLS Rec Layer-1 HandShake: Server Hello. Certificate.

I've hunted around the registry to try to figure out which certificate SQL is using, but I haven't had any luck there yet. Any ideas on what to look for next?

Single slave - multiple master MySQL replication

Posted: 18 Apr 2013 05:10 PM PDT

I need to replicate different MySQL databases from multiple servers into a single slave server. How can this be done? Is there a way to define multiple master hosts?

Is it possible in Oracle to trace SQL statements that result in errors?

Posted: 18 Apr 2013 11:14 AM PDT

We have Oracle 11g in production. The application system is still under active development, so it would be very handy to capture the SQL statements that cause any error.

Does Oracle provide a standard function to trace and log these statements and additional (debug) info?
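Oracle does offer a hook for this: a database-level AFTER SERVERERROR trigger fires whenever a server error occurs. A minimal sketch (the err_log table and all names here are an editor's illustration, not from the post):

CREATE TABLE err_log (
    logged_at  DATE,
    username   VARCHAR2(30),
    err_stack  VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER trg_log_errors
AFTER SERVERERROR ON DATABASE
BEGIN
    INSERT INTO err_log (logged_at, username, err_stack)
    VALUES (SYSDATE,
            SYS_CONTEXT('USERENV', 'SESSION_USER'),
            DBMS_UTILITY.FORMAT_ERROR_STACK);
END;
/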

Do I need client certs for mysql ssl replication?

Posted: 18 Apr 2013 01:51 PM PDT

I'm setting up mysql replication using SSL, and have found two different guides.

The first one creates both client and server certs, while the second one only creates server certs.

I don't know enough about SSL to understand the implication of one option over the other. Should the slave be using the client certs or the server certs?
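For orientation (standard replication options; paths are placeholders): the slave connects to the master as a TLS client, so any cert it presents via CHANGE MASTER TO is a client cert, and it is only required if the replication user on the master is defined with REQUIRE X509 or similar:

CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',
    MASTER_SSL = 1,
    MASTER_SSL_CA = '/etc/mysql/certs/ca-cert.pem',
    MASTER_SSL_CERT = '/etc/mysql/certs/client-cert.pem',
    MASTER_SSL_KEY = '/etc/mysql/certs/client-key.pem';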

Custom sp_who/sp_whoUsers

Posted: 18 Apr 2013 03:02 PM PDT

I need to allow a client in a dev DW SQL 2K8R2 environment to view and kill processes, but I do not want to grant VIEW SERVER STATE to this person (he's a former SQL DBA and is considered a potential internal threat).

When I run the following, it returns one row, as if the user had run the sp themselves with their current permissions.

USE [master]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[usp_who] WITH EXECUTE AS OWNER
AS
BEGIN
    SET NOCOUNT ON;
    EXEC master.dbo.sp_who;
END

Changing the "with execute as" to "self" (I'm a sysadmin) returns the same results. I've also tried the query below instead of calling sp_who, and it likewise returns only one row.

select * from sysprocesses  

It seems that the context isn't switching, or persisting, throughout the execution of the procedure. And this is to say nothing of how I'm going to allow this person to "kill" processes.

Does anyone have a solution or some suggestions for this seemingly unique problem?
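A common pattern for exactly this situation (an editor's sketch; names and the password are placeholders) is module signing: EXECUTE AS inside a database typically cannot confer server-level permissions such as VIEW SERVER STATE, but a certificate-signed procedure can carry them without granting anything to the user directly:

USE master;
CREATE CERTIFICATE WhoCert
    ENCRYPTION BY PASSWORD = 'placeholder-password'
    WITH SUBJECT = 'Signs usp_who';
ADD SIGNATURE TO dbo.usp_who BY CERTIFICATE WhoCert
    WITH PASSWORD = 'placeholder-password';
CREATE LOGIN WhoCertLogin FROM CERTIFICATE WhoCert;
GRANT VIEW SERVER STATE TO WhoCertLogin;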

Need to suppress rowcount headers when using \G

Posted: 18 Apr 2013 10:02 AM PDT

Is there a command to suppress the rowcount headers and asterisks when using \G to execute a SQL statement? I am executing mysql with the -s and --skip-column-names options, but these don't suppress the rowcounts.

How can I replicate some tables without transferring the entire log?

Posted: 18 Apr 2013 11:02 AM PDT

I have a mysql database that contains some tables with private information, and some tables with public information.

I would like to replicate only the tables containing public information from one database to another, making sure that NO confidential information ever gets stored on the slave.

I know I can use the replicate-do-table option to specify that only some tables are replicated, but my understanding is that the entire binlog is still transferred to the slave.

Is there a way to ensure that only the public information is transferred to the slave?
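The slave-side replicate-* filters are applied after the binlog has already been transferred, so to keep confidential rows off the wire entirely the filtering has to happen on the master. One option (an editor's sketch; MySQL's binlog filters are per-database, not per-table, so this assumes the public tables live in their own schema):

# master my.cnf
[mysqld]
binlog-do-db = public_db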

How to search whole MySQL database for a particular string

Posted: 18 Apr 2013 01:02 PM PDT

Is it possible to search a whole database's tables (every row and column) to find a particular string?

I have a database named A with about 35 tables. I need to search for the string "hello" and I don't know in which table this string is saved. Is it possible?

Using MySQL

I am a Linux admin and I am not familiar with databases; it would be really helpful if you could explain the query as well.
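One low-tech way to do it (an editor's sketch): information_schema is MySQL's catalogue of every table and column, so you can have it generate one SELECT per text column and then run the statements it prints:

SELECT CONCAT('SELECT ''', table_name, '.', column_name, ''' AS hit, COUNT(*) ',
              'FROM `A`.`', table_name, '` WHERE `', column_name,
              '` LIKE ''%hello%'';') AS generated_query
FROM information_schema.columns
WHERE table_schema = 'A'
  AND data_type IN ('char', 'varchar', 'text', 'mediumtext', 'longtext');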

multivalued weak key in ER database modeling

Posted: 18 Apr 2013 12:02 PM PDT

I was wondering about this since I couldn't find any clarification for it. I want to store movies that exist in different formats (DVD, Blu-ray, etc.), where the price and the stock quantity differ per format, so I came up with this:

[image: first ER diagram]

Is this correct from a design perspective? Does it imply redundancy? I don't understand how this will be stored in a table. Would it be better to do it like this:

[image: alternative ER diagram]

Thanks in advance.

EDIT: I've added some more descriptive information about what I want to store at this point of the design. I want to store information about sales. For each movie the company carries, I need to store format, price, and stock quantity. I will also need to store customer information: a unique id, name, surname, address, the movies he/she has already bought, and his/her credit card number. Finally, I will have a basket that temporarily keeps items (let's suppose items other than movies exist too) that the customer wants to buy.
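However the ER diagram is drawn, a weak entity keyed on (movie, format) typically lands in tables like this (an editor's sketch; names are illustrative):

CREATE TABLE movie (
    movie_id INT PRIMARY KEY,
    title    VARCHAR(200) NOT NULL
);

CREATE TABLE movie_format (
    movie_id INT NOT NULL,
    format   VARCHAR(20) NOT NULL,    -- 'dvd', 'bluray', ...
    price    DECIMAL(6,2) NOT NULL,
    quantity INT NOT NULL,
    PRIMARY KEY (movie_id, format),   -- format identifies a row only within its movie
    FOREIGN KEY (movie_id) REFERENCES movie (movie_id)
);

There is no redundancy here: each (movie, format) pair stores its price and quantity exactly once.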

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 18 Apr 2013 04:02 PM PDT

First, a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past in Access I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine.

Now I am trying to do the same thing: create a new Access DB and link to this same DSN. I get as far as seeing the tables, but after making my selection, I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly."

I've tried creating another data source (System and User) with no luck. The environment is Win XP, Access 2007, Advantage DB 8.1.

Foreign Key Constraint fails

Posted: 18 Apr 2013 07:47 PM PDT

I have the following tables:

// Base Scans
CREATE TABLE `basescans` (
    `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
    `name` VARCHAR(100) NULL DEFAULT NULL,
    `status_id` INT(10) UNSIGNED NULL DEFAULT NULL,
    PRIMARY KEY (`id`),
    CONSTRAINT `status_id_fk` FOREIGN KEY (`status_id`) REFERENCES `statuses` (`id`) ON UPDATE CASCADE ON DELETE SET NULL
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB
ROW_FORMAT=COMPACT
AUTO_INCREMENT=29

// Statuses
CREATE TABLE `statuses` (
    `id` INT(10) UNSIGNED NULL AUTO_INCREMENT,
    `name` VARCHAR(100) NULL DEFAULT NULL,
    PRIMARY KEY (`id`)
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB
ROW_FORMAT=DEFAULT
AUTO_INCREMENT=4

Trying to save the first table fails when I put in that foreign key constraint. Can't figure out why. Both of the columns referenced in the constraint have the same type, size, etc:

INT(10) UNSIGNED NULL  

They differ only in default value: one has a default of NULL, the other is AUTO_INCREMENT. I didn't think that made a difference for foreign key constraints, but I could be wrong.

Both tables are InnoDB and UTF8. What am I missing here?

UPDATED: My specific error:

/* SQL Error (1452): Cannot add or update a child row: a foreign key constraint fails (`db`.<result 2 when explaining filename '#sql-31c2_22ac1e1'>, CONSTRAINT `status_id_fk` FOREIGN KEY (`status_id`) REFERENCES `statuses` (`id`) ON DELETE SET NULL ON UPDATE CASCADE) */  
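Worth noting (an editor's addition): error 1452 is raised when existing child rows violate the constraint being added, so the usual first check is to look for orphaned values (a sketch):

SELECT b.id, b.status_id
FROM basescans AS b
LEFT JOIN statuses AS s ON s.id = b.status_id
WHERE b.status_id IS NOT NULL
  AND s.id IS NULL;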

SSIS Row Count: Getting a null variable error where there is clearly a selected variable

Posted: 18 Apr 2013 02:02 PM PDT

Validation error. Build Files Count VIE [245]: The variable "(null)" specified by VariableName property is not a valid variable. Need a valid variable name to write to.

From what I can tell, this error is thrown when no variable is assigned to the VariableName property; however, I definitely have a variable assigned, as seen in the image below:

"Count VIE" Row Count Properties

I've deleted the Row Count component and remade it, but the error continues to show up. Here is a snapshot of the Data Flow in question:

"Build Files" Data Flow

I'm not sure if its inclusion in a Conditional Split may be causing this error, but none of the other Row Count components seem to be throwing this error.

How do I copy my SQL Azure database to a local SQL Server instance?

Posted: 18 Apr 2013 12:10 PM PDT

I have an OLTP database hosted on a SQL Azure instance. I want to pull a copy of the database down from the cloud so I can run some heavy extracts and OLAP-style queries against it without impacting the source database.

How do I pull a copy of the database down to a local SQL Server instance?
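One common first step (an editor's sketch; CREATE DATABASE ... AS COPY OF is the documented SQL Azure syntax for taking a transactionally consistent copy, which can then be exported as a BACPAC and imported into a local instance):

CREATE DATABASE MyDb_Copy AS COPY OF MyDb;  -- run against the Azure server; names are placeholders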
