Friday, August 2, 2013

[how to] How to improve TempDb on RAMDisk showing par performance

How to improve TempDb on RAMDisk showing par performance

Posted: 02 Aug 2013 07:23 PM PDT

Given two SQL Server instances, where the second instance is configured with a RAMDisk for tempdb, and the following test case:

-- Create source data
select top(1000000) * /* ~10 cols */ into #t1 from SomeData;

Then measure the total runtime for these cascading selects:

-- Benchmark
select * into #t4 from #t1;
select * into #t5 from #t4;
select * into #t6 from #t5;

The runtimes for me came out the same (~15s vs ~15s). One CPU maxes out for the entire test period.

Is there a way to speed those queries up by spreading the work across CPUs (is that what tempdb file partitioning is for)?
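For reference, splitting tempdb into multiple data files is done with ALTER DATABASE; a minimal sketch follows (file names, path and sizes are assumptions), though note this addresses allocation contention rather than forcing a single query onto more CPUs:

-- Sketch: add extra tempdb data files on the RAMDisk (names/sizes assumed)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'R:\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'R:\tempdb3.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);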

Fill factor based on index ranges

Posted: 02 Aug 2013 05:31 PM PDT

I'm designing a Postgres database for an events app. The app lists events sorted by when they start. Initially the app displays only 30 events. As users scroll through the list of events, more events are fetched from the database. In reduced form, the queries (depending on the direction in which the user is scrolling) are:

SELECT ?
FROM events
WHERE starts_at >= ?
ORDER BY starts_at
OFFSET ? LIMIT ?

SELECT ?
FROM events
WHERE starts_at < ?
ORDER BY starts_at DESC
OFFSET ? LIMIT ?

I plan on clustering the table on starts_at with a fill factor of about 70%, and expect I'll want to run the cluster command periodically to maintain performance.

Almost all of the events that users add will have a starts_at value in the future. Therefore, although specifying a 70% fill factor for events which start in the future makes sense, using a 70% fill factor for events that started in the past seems like a waste of disk space.

Is there a way to have Postgres cluster the events table such that the fill factor for events where starts_at < CURRENT_TIMESTAMP is 99% while the fill factor for events where starts_at >= CURRENT_TIMESTAMP is 70%?
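For context, Postgres sets fillfactor per table or per index rather than per key range; a minimal sketch of the knobs involved (the index name is an assumption):

-- Sketch: the standard, single-value fillfactor settings
ALTER TABLE events SET (fillfactor = 70);
ALTER INDEX events_starts_at_idx SET (fillfactor = 70);  -- assumed index name
CLUSTER events USING events_starts_at_idx;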

Need help for creating Church Database [on hold]

Posted: 02 Aug 2013 03:00 PM PDT

Please, I am creating a database for church tithe collection. The church collects tithes every Sunday. My problem is that I don't know whether I have to create a table for each month of the year, and keep creating tables for every year, or whether there is a better way. Currently I have created a database with twelve tables in it, one for every month of the year, but I am wondering what to do for the years to come. Do I have to create more tables for every year? I am using MS SQL Server 2005 with Visual Basic 2005 as the front end. I will be very grateful if an expert out there can help me. Thanks.
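For what it's worth, the usual relational approach is a single table with a date column rather than one table per month or per year; a minimal sketch with made-up table and column names:

-- Sketch: one table holds every Sunday's collections, for all months and years
CREATE TABLE dbo.TitheCollection
(
    CollectionID   INT IDENTITY(1,1) PRIMARY KEY,
    MemberID       INT           NOT NULL,  -- assumed link to a Members table
    CollectionDate DATETIME      NOT NULL,  -- the Sunday the tithe was collected
    Amount         DECIMAL(10,2) NOT NULL
);

-- Monthly or yearly figures then become queries, e.g. the total for August 2013:
SELECT SUM(Amount)
FROM dbo.TitheCollection
WHERE CollectionDate >= '20130801' AND CollectionDate < '20130901';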

How to optimise T-SQL query using Execution Plan

Posted: 02 Aug 2013 02:30 PM PDT

I have a SQL query that I have spent the past two days trying to optimise using trial-and-error and the execution plan, but to no avail. Please forgive me for doing this but I will post the entire execution plan here. I have made the effort to make the table and column names in the query and execution plan generic both for brevity and to protect my company's IP. The execution plan can be opened with SQL Sentry Plan Explorer.

I have done a fair amount of T-SQL, but using execution plans to optimise my query is a new area for me and I have really tried to understand how to do it. So, if anyone could help me with this and explain how this execution plan can be deciphered to find ways in the query to optimise it, I would be eternally grateful. I have many more queries to optimise - I just need a springboard to help me with this first one.

This is the query:

DECLARE @Param0 DATETIME     = '2013-07-29';
DECLARE @Param1 INT          = CONVERT(INT, CONVERT(VARCHAR, @Param0, 112))
DECLARE @Param2 VARCHAR(50)  = 'ABC';
DECLARE @Param3 VARCHAR(100) = 'DEF';
DECLARE @Param4 VARCHAR(50)  = 'XYZ';
DECLARE @Param5 VARCHAR(100) = NULL;
DECLARE @Param6 VARCHAR(50)  = 'Text3';

SET NOCOUNT ON

DECLARE @MyTableVar TABLE
(
    B_Var1_PK int,
    Job_Var1 varchar(512),
    Job_Var2 varchar(50)
)

INSERT INTO @MyTableVar (B_Var1_PK, Job_Var1, Job_Var2)
SELECT B_Var1_PK, Job_Var1, Job_Var2 FROM [fn_GetJobs] (@Param1, @Param2, @Param3, @Param4, @Param6);

CREATE TABLE #TempTable
(
    TTVar1_PK INT PRIMARY KEY,
    TTVar2_LK VARCHAR(100),
    TTVar3_LK VARCHAR(50),
    TTVar4_LK INT,
    TTVar5 VARCHAR(20)
);

INSERT INTO #TempTable
SELECT DISTINCT
    T.T1_PK,
    T.T1_Var1_LK,
    T.T1_Var2_LK,
    MAX(T.T1_Var3_LK),
    T.T1_Var4_LK
FROM
    MyTable1 T
    INNER JOIN feeds.MyTable2 A ON A.T2_Var1 = T.T1_Var4_LK
    INNER JOIN @MyTableVar B ON B.Job_Var2 = A.T2_Var2 AND B.Job_Var1 = A.T2_Var3
GROUP BY T.T1_PK, T.T1_Var1_LK, T.T1_Var2_LK, T.T1_Var4_LK

-- This is the slow statement...
SELECT
    CASE E.E_Var1_LK
        WHEN 'Text1' THEN T.TTVar2_LK + '_' + F.F_Var1
        WHEN 'Text2' THEN T.TTVar2_LK + '_' + F.F_Var2
        WHEN 'Text3' THEN T.TTVar2_LK
    END,
    T.TTVar4_LK,
    T.TTVar3_LK,
    CASE E.E_Var1_LK
        WHEN 'Text1' THEN F.F_Var1
        WHEN 'Text2' THEN F.F_Var2
        WHEN 'Text3' THEN T.TTVar5
    END,
    A.A_Var3_FK_LK,
    C.C_Var1_PK,
    SUM(CONVERT(DECIMAL(18,4), A.A_Var1) + CONVERT(DECIMAL(18,4), A.A_Var2))
FROM #TempTable T
    INNER JOIN TableA (NOLOCK) A ON A.A_Var4_FK_LK  = T.TTVar1_PK
    INNER JOIN @MyTableVar     B ON B.B_Var1_PK     = A.Job
    INNER JOIN TableC (NOLOCK) C ON C.C_Var2_PK     = A.A_Var5_FK_LK
    INNER JOIN TableD (NOLOCK) D ON D.D_Var1_PK     = A.A_Var6_FK_LK
    INNER JOIN TableE (NOLOCK) E ON E.E_Var1_PK     = A.A_Var7_FK_LK
    LEFT OUTER JOIN feeds.TableF (NOLOCK) F ON F.F_Var1 = T.TTVar5
WHERE A.A_Var8_FK_LK = @Param1
GROUP BY
    CASE E.E_Var1_LK
        WHEN 'Text1' THEN T.TTVar2_LK + '_' + F.F_Var1
        WHEN 'Text2' THEN T.TTVar2_LK + '_' + F.F_Var2
        WHEN 'Text3' THEN T.TTVar2_LK
    END,
    T.TTVar4_LK,
    T.TTVar3_LK,
    CASE E.E_Var1_LK
        WHEN 'Text1' THEN F.F_Var1
        WHEN 'Text2' THEN F.F_Var2
        WHEN 'Text3' THEN T.TTVar5
    END,
    A.A_Var3_FK_LK,
    C.C_Var1_PK

IF OBJECT_ID(N'tempdb..#TempTable') IS NOT NULL
BEGIN
    DROP TABLE #TempTable
END
IF OBJECT_ID(N'tempdb..#TempTable') IS NOT NULL
BEGIN
    DROP TABLE #TempTable
END

What I have found is that the third statement (commented as being slow) is the part that takes the most time; the two statements before it return almost instantly.

The execution plan is available as XML at this link.

It is better to right-click and save the file, then open it in SQL Sentry Plan Explorer or some other viewer, rather than opening it in your browser.

If you need any more information from me about the tables or data, please don't hesitate to ask.

Importing delimited files into SQL Server

Posted: 02 Aug 2013 12:07 PM PDT

I am trying to import a large pipe-delimited (|) file into SQL Server.

I know basically nothing about the data, I just want to get it imported.

When I go to Database -> Tasks -> Import Data, I use the advanced option to suggest types and provide padding.

The problem is that that routine does not go through the whole file, even when I specify an absurdly large number of rows (1,000,000,000), so I am constantly getting truncation errors and having to change the types, restart the import, and so on.

Is there a better way to do this?

Note: the file is not on the same machine as SQL Server.
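As an alternative to the wizard, something like BULK INSERT with an explicit field terminator is often used; a rough sketch (the staging table, share and file name are assumptions, and the file must be readable from the server, for example via a share):

-- Sketch: stage everything as wide varchar first, convert types later
CREATE TABLE dbo.ImportStaging
(
    Col1 VARCHAR(500),
    Col2 VARCHAR(500),
    Col3 VARCHAR(500)  -- repeat for however many columns the file actually has
);

BULK INSERT dbo.ImportStaging
FROM '\\fileserver\share\bigfile.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', TABLOCK);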

How to identify tables with millions of entries

Posted: 02 Aug 2013 03:20 PM PDT

On a Debian server with Apache and MySQL, how can I find out whether any one table is getting spammed?

I have lots of different blogs, WordPress sites, wikis, and so on from different customers on my server.

It seems like some PHP applications are not protected against spamming, so some tables get really big and slow down the whole server.

I would need a script that monitors all the tables. Or is there a simple tool I could install to get a report if something weird happens?
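As a starting point, a query like this against information_schema usually surfaces the suspiciously large tables (row counts are estimates for InnoDB):

SELECT table_schema, table_name, table_rows,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.TABLES
WHERE table_schema NOT IN ('mysql', 'information_schema')
ORDER BY (data_length + index_length) DESC
LIMIT 20;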

Import .bak file in Microsoft SQL Server 2008 Service Pack 3

Posted: 02 Aug 2013 12:39 PM PDT

Please, someone help me...

I am using Microsoft SQL Server 2008 Service Pack 3. I have a backup of my database as a TestDatabase.bak file. I want to restore it. I have tried, but after the restore the database is stuck showing "Restoring...", as shown in the image below.

What should I do? Please help me.

[Screenshot: the database shown in the "Restoring..." state]
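For reference, a database left in the "Restoring..." state is normally brought online by restoring with recovery; a minimal sketch, assuming no further backup files need to be applied:

RESTORE DATABASE TestDatabase WITH RECOVERY;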

Is there a standard formula to calculate the optimal resources required by SQL Server based on the RAM size of the server

Posted: 02 Aug 2013 12:40 PM PDT

The server that SQL Server runs on has 8 GB of RAM. Is there a standard formula that DBAs use to gauge the minimum and maximum resources to be allocated to SQL Server based on the server's RAM size?

I need to know what values (in MB) are optimal for these settings:

  • Minimum server memory (MB)
  • Maximum server memory (MB), and
  • Minimum memory per query.

My research got me to this link: Guideline.

But I think the best solution is to understand how he arrived at those figures.
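For reference, these settings are changed with sp_configure; the figures below are illustrative assumptions (leave a few GB for the OS on an 8 GB box), not values taken from the linked guideline:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 1024;  -- assumption
EXEC sp_configure 'max server memory (MB)', 6144;  -- assumption: keep ~2 GB free for the OS
RECONFIGURE;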

How to generate a range of numbers from a text box [on hold]

Posted: 02 Aug 2013 07:58 PM PDT

I have 3 fields: MIN, MAX, and SN.

I use a form to enter numbers into MIN and MAX. For example, if MIN is 10 and MAX is 20, then SN should list 10 to 20, for a total of 11 records in the table.

What is the easiest way to do this? Thanks.

Here is my code:

Private Sub xx()
    Dim i As Integer

    For i = [Forms]![MAIN]![MIN] To [Forms]![MAIN]![MAX]
        [SN] = i
    Next i
End Sub

I tried DoCmd.RunCommand acCmdSave and DoCmd.RunCommand acCmdSaveRecord before Next i,

but both save the result into one field. What command can save each counter value as a separate record?

What is a standard or conventional name for a column representing the display order of the rows? [on hold]

Posted: 02 Aug 2013 04:59 PM PDT

For example, a junction table associating a product and its pictures.

create table product_picture (
  product_id bigint not null references product(id),
  picture_id bigint not null references picture(id),
  some_column int not null
);

What is a common/conventional, short, general name for "some_column" if it represents the display order of the photos?

"order", "sort", and "sequence" are out, as they are keywords.

Can I force a user to use WITH NOLOCK?

Posted: 02 Aug 2013 11:43 AM PDT

Can I force a user's queries to always run with the NOLOCK hint? For example, they type

select * from customer  

But what is executed on the server is

select * from customer with (nolock)  

THIS QUESTION IS NOT:
About the various pros and cons of NOLOCK, respectfully. I know what they are; this is not the place to discuss them.
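Not a way to force another user's queries, but for completeness, the session-level equivalent of hinting every table with NOLOCK is the READ UNCOMMITTED isolation level:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
select * from customer;  -- now behaves as if every table carried WITH (NOLOCK)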

my.cnf validation

Posted: 02 Aug 2013 11:19 AM PDT

We have moved our server from an old 8 GB RAM server to a new 16 GB RAM server so that we could get better performance.

The server is still consuming a lot of memory.

The tables in the database are not designed for InnoDB. The DB physical file size is approximately 2.8 GB.

The my.cnf parameters are:

[client]
#password           = your_password
port                = 3306
socket              = /var/lib/mysql/mysql.sock

[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-locking
#skip-bdb#niraj
skip-external-locking
key_buffer                  = 128M
max_length_for_sort_data    = 1024
max_tmp_tables              = 32M
table_cache                 = 64
max_allowed_packet          = 128M
sort_buffer_size            = 32M
read_buffer_size            = 10M
join_buffer_size            = 256M
read_rnd_buffer_size        = 64M
myisam_sort_buffer_size     = 256M
thread_cache_size           = 64
query_cache_size            = 256M
thread_concurrency          = 8
max_connect_errors          = 100
log-bin=mysql-bin
server-id                            = 1
set-variable = max_connections       = 10000
set-variable = connect_timeout       = 280
set-variable = interactive_timeout   = 280
set-variable = net_read_timeout      = 300
innodb_buffer_pool_size              = 3G
innodb_additional_mem_pool_size      = 32M
innodb_log_file_size                 = 768M
innodb_log_buffer_size               = 16M
#innodb_flush_log_at_trx_commit      = 1
innodb_lock_wait_timeout             = 50

[mysqldump]
quick
max_allowed_packet          = 64M

[mysql]
no-auto-rehash

[isamchk]
key_buffer              = 64M
sort_buffer_size        = 256k
read_buffer             = 256k
write_buffer            = 256k

[myisamchk]
key_buffer              = 64M
sort_buffer_size        = 256M
read_buffer             = 256k
write_buffer            = 256k

[mysqlhotcopy]
interactive-timeout

Could anyone please validate this my.cnf and suggest why it is consuming so much memory?
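As a back-of-the-envelope check (an approximation only, assuming one of each per-session buffer per connection), worst-case usage is roughly the global buffers plus max_connections times the per-session buffers, and with these settings that number is enormous:

-- global: key_buffer 128M + query_cache 256M + innodb_buffer_pool 3G + innodb log/pool extras ~48M
-- per connection: sort 32M + read 10M + join 256M + read_rnd 64M = 362M
SELECT 128 + 256 + 3072 + 48        AS global_buffers_mb,
       10000 * (32 + 10 + 256 + 64) AS worst_case_per_connection_mb;  -- about 3.6 million MB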

Monthly backup of SQL server DB to PostgreSQL?

Posted: 02 Aug 2013 11:37 AM PDT

The company I'm working for has a SQL Server with read-only access. They use Crystal Reports hooked up to PostgreSQL for reporting. Is there any way I can move all the data from the MSSQL DB to PostgreSQL without user interaction? That seems to be the sticking point of what I'm trying to do. They need to be able to run this report after I leave without having to interact with it during the process.

Or am I looking at this the wrong way? Is there a way to save a "snapshot" of the SQL Server DB that can be manipulated in Crystal Reports? The ultimate goal is that, since the DB is dynamic, we need a static DB at the end of each month that all the reports can be run against without having to worry about it changing.
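One unattended pattern (a sketch only, with made-up schema, table and path names) is to export each table from SQL Server on a schedule, for example with bcp or an SSIS job, and reload the flat files into a snapshot schema on the PostgreSQL side:

-- PostgreSQL side: reload the monthly snapshot from the exported CSV
TRUNCATE snapshot.orders;
COPY snapshot.orders FROM '/var/lib/postgresql/import/orders_2013_08.csv'
WITH (FORMAT csv, HEADER true);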

How do I migrate varbinary data to Netezza?

Posted: 02 Aug 2013 01:19 PM PDT

I got a warning message while migrating DDL from SQL Server to Netezza:

Warning: [dbo].[spec_binarymessage].[blobdata] data type [varbinary] is not supported the target system and will be scripted as VARCHAR(16000).

I'm wondering whether this kind of data conversion will cause issues such as truncation of data.

How can I get my linked server working using Windows authentication?

Posted: 02 Aug 2013 04:05 PM PDT

I'm trying to get a linked server to ServerA created on another server, ServerB, using "Be made using the login's current security context" in a domain environment. I read that I'd need to have SPNs created for the service accounts that run SQL Server on each of the servers in order to enable Kerberos. I've done that, and both now show the authentication scheme to be Kerberos; however, I'm still facing the error:

"Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'".  

In Active Directory, I can see that the service account for ServerB is trusted for delegation to MSSQLSvc, but I noticed that the service account for ServerA does not yet have "trust this user for delegation" enabled. Does the target server also need to have that option enabled? Is anything else necessary to be able to use the current Windows login to use a linked server?
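For what it's worth, the authentication scheme mentioned above can be checked per connection with this DMV query:

SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;  -- returns KERBEROS or NTLM for the current connection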

Connection to local SQL Server 2012 can be established from SSMS 2008 but not from SSMS 2012

Posted: 02 Aug 2013 03:49 PM PDT

I have two local SQL Server instances running on my local machine. The first is SQL Server 2008 R2 Enterprise Edition (named MSSQLSERVER) and the second is SQL Server 2012 Business Intelligence Edition.

My problem is with SSMS 2012, which can connect to remote servers but not to the local 2012 instance; I can, however, connect to this instance from SSMS 2008.

The error message I get when trying to login is

Login Failed. The login is from an untrusted domain and cannot be used with Windows Authentication. (Microsoft SQL Server, Error: 18452)

I must point out that I don't have the necessary privileges to access SQL Server Configuration Manager (blocked by group policy).

Any help would be appreciated.

Is there a way to find the least recently used tables in a schema?

Posted: 02 Aug 2013 05:49 PM PDT

Is there a way to find the least recently used tables in a MySQL schema, besides going into the data directories? I was hoping there was a metadata or status trick, but Update_Time in SHOW TABLE STATUS and INFORMATION_SCHEMA is always NULL.
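For reference, this is the kind of metadata query the question refers to; for InnoDB tables on MySQL versions of that era, UPDATE_TIME is simply not maintained, which is why it comes back NULL:

SELECT table_name, update_time
FROM information_schema.TABLES
WHERE table_schema = 'myschema'  -- hypothetical schema name
ORDER BY update_time;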

How to find Oracle home information on Unix?

Posted: 02 Aug 2013 08:49 PM PDT

I need help finding the Oracle home path corresponding to a database instance in a RAC environment. I am aware of a few ways to achieve this; I'm listing them below to avoid getting the same answers.

  1. /etc/oratab: this file is not mandatory and hence may not contain all instance information.

  2. Parsing the contents of listener.ora: in a RAC environment, listener.ora can be located in a non-default location.

  3. Use TNS_ADMIN to find the listener.ora location and parse the file.

  4. The ORACLE_HOME environment variable: it may not always be set.

  5. ps -ef | grep tns to get the home path from the service name: this gives the path only for the currently running listener.

  6. select "SYSMAN"."MGMT$TARGET_COMPONENTS"."HOME_LOCATION"
    from "SYSMAN"."MGMT$TARGET_COMPONENTS"
    where "SYSMAN"."MGMT$TARGET_COMPONENTS"."TARGET_NAME" = <Database SID>

    The SYSMAN schema can be dropped after the first-time login to Oracle.

  7. SELECT NVL(SUBSTR(FILE_SPEC, 1, INSTR(FILE_SPEC, '\', -1, 2) -1) , SUBSTR(FILE_SPEC, 1, INSTR(FILE_SPEC, '/', -1, 2) -1)) FOLDER
    FROM DBA_LIBRARIES
    WHERE LIBRARY_NAME = 'DBMS_SUMADV_LIB';

    So if a DBA changes the Oracle home (and hence the location of libqsmashr.so) after installing Oracle, the path retrieved from the above query would be invalid.

  8. . oraenv: works only for 11g.

I am trying to find a generic way that will work for all Oracle versions, and it should not depend on anything that is not normally available to a DBA.

Do you have any way other than those listed above to do this?

Many Thanks in advance.

How to handle "many columns" in OLAP RDBMS

Posted: 02 Aug 2013 12:49 PM PDT

I have a fact table that has around 1,000 different numerical attributes (i.e. columns). I would like to store this in a column-oriented DB and perform cube analysis on it.

I tried to design a star schema, but I'm not sure how to handle this many columns. Normalising it sounds wrong, but I can't just have flat columns either. The combinations of attributes are also too diverse to have a simple dimension table for this, even if I were to reduce the numerical values into categories (ranges), which is an option. I thought about storing them as XML or JSON for each row, but that doesn't sound great either.

If it helps, I'm planning to use Amazon Redshift for the DB.

Note: We have a strong preference for Redshift, as it fits perfectly for at least a few other operations we do on this data. Hence I want to avoid other technologies like HBase if possible.
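One option alluded to above is a "long" layout, with the attribute as a dimension instead of 1,000 flat columns; a rough sketch with made-up names, traded off against a much larger row count:

CREATE TABLE fact_measure
(
    entity_key    BIGINT,
    date_key      INT,
    attribute_key INT,           -- which of the ~1,000 numeric attributes
    value         DECIMAL(18,4)
);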

Tuning advisor with Extended events?

Posted: 02 Aug 2013 03:30 PM PDT

With SQL traces, I was able to analyze them with the Database Engine Tuning Advisor to obtain basic performance-tuning recommendations (missing indexes, statistics, ...).

Now, with SQL 2012 and Extended Events, how can I do something similar?

Thanks

MySQL DB server hits 400% CPU

Posted: 02 Aug 2013 11:49 AM PDT

I have been facing a problem with my database server for about a month. Below are the observations I see when it hits its peak.

 - load average: 40 to 50
 - CPU %: 400%
 - idle %: 45%
 - wait %: 11%
 - vmstat procs: r -> 14 and b -> 5

It then drains down within 5 minutes. When I check SHOW PROCESSLIST, I see DML and SQL queries halted for some minutes, and everything processes very slowly, even though each query is indexed appropriately and there is normally no delay: most of the time any query executed to serve the application returns in less than 1 second.

  • MySQL version: 5.0.77
  • OS: CentOS 5.4
  • Mem: 16 GB RAM (80% allocated to INNODB_BUFFER_POOL_SIZE)
  • Database size: 450 GB
  • 16 processors / 4 cores
  • Not using the per-table (file-per-table) model.
  • TPS ranges from 50 to 200.
  • Master to a slave of the same configuration, and seconds behind master is 0.

The URL below shows SHOW INNODB STATUS \G and SHOW OPEN TABLES at the time of a spike, which subsided within 5 minutes. In rare scenarios, roughly once in two months, I see the processes take 5 to 8 hours to drain back to normal. Throughout, I monitor the load, processor utilization, how it gradually splits its tasks, the processlist, the InnoDB status and the IO status; I do not need to do anything to bring it down. It serves the applications promptly and after some time drains down to normal. Can you find anything suspicious in the URL, such as locks or OS waits, any suggestion on what to triage first, or what could have caused such spikes?

http://tinyurl.com/bm5v4pl -> "show innodb status \G and show open tables at DB spikes."

Also there are some concerns that I would like to share with you.

  1. Recently I have seen a table that gets only about 60 inserts per second. It predominantly locks for a while, waiting for the auto-increment lock to be released, so subsequent inserts stay in the processlist. After a while the table shows IN_USE by about 30 threads, and later I don't know what frees them and clears the queue. (During this period the load goes above 15 for about 5 minutes.)

  2. Suppose the application functionality should be shaped to best suit how the DB server reacts. There are 3 to 5 functionalities, each an independent entity schema-wise. Whenever I see the locks, all the other schemas are affected too.

  3. What puzzles me most is the last one. The slave keeps in sync with the master with a delay of 0 seconds at all times, even though the slave applies changes with a single SQL thread fed in FIFO order from the relay log, which in turn comes from the binary logs the master generated. If this single-threaded slave can keep the load low and stay up to date, do the hits for these functionalities really need to be made concurrently, which I assume is causing the IO locks at the OS level? Can this be organized in the application itself to keep the concurrency density thinner?

Database stuck in restoring and snapshot unavailable

Posted: 02 Aug 2013 10:49 AM PDT

I tried to restore my database from a snapshot. This usually took around a minute to complete the last couple of times. When I did it today, it hadn't completed after around 30 minutes and the SPID was in a suspended state. I stopped the query, and now my database is stuck in the restoring state and my snapshot is unavailable. Am I screwed?

USE master;
RESTORE DATABASE QA from
DATABASE_SNAPSHOT = 'QA_Snap_Testing';
GO

Multiple database servers for performance vs failover

Posted: 02 Aug 2013 06:49 PM PDT

If I have two database servers, and I am looking for maximum performance vs high-availability, what configuration would be best?

Assuming the architecture is two load-balanced web/app servers in front of two DB servers, will I be able to have both DB servers active with synced data, with a web1-to-db1, web2-to-db2 setup? Is this active/active?

I'm also aware that the two db servers can have their own schema to manually 'split' the db needs of the app. In this case daily backups would be fine. We don't have 'mission critical data.'

If it matters, we have traffic around 3,000-7,000 simultaneous users.

Workspace Memory Internals

Posted: 02 Aug 2013 01:19 PM PDT

Based on my reading of SQL Server 2008 Internals and Troubleshooting (borrowed from my local library in Illinois) by Christian Bolton, Brent Ozar, et al., and a lot of searching on the web, I am trying to confirm my understanding of SQL Server. I would appreciate it if someone could confirm or correct it.

Every query or operation that requires a query memory grant will need workspace memory. In general, queries using Sort, Hash Match join, Parallelism (not sure about this), Bulk Insert (not sure), Index Rebuild, etc. will need query workspace memory.

Workspace memory is part of the SQL Server buffer pool (it is allocated as part of the buffer pool), and maximum workspace memory is 75% of the memory allocated to the buffer pool. By default, a single query cannot get more than 25% of workspace memory (in SQL 2008/SQL 2012 this is controlled by the Resource Governor default workload group out of the box).

Seeking confirmation of my understanding:

1) Consider a system with 48 GB of RAM and max server memory configured to 40 GB. Does this mean max workspace memory is limited to 30 GB, and a single query cannot get more than 10 GB of workspace memory (query memory)? So if you have a bad query working with a billion rows, doing a massive hash join and needing more than 10 GB of workspace memory, would it even go through the memory grant queue, or would it spill to disk right away?

2) If a query doing a massive sort operation has been assigned a workspace memory grant of 5 MB, and during execution it turns out that, due to bad statistics or missing indexes, the query actually needs 30 MB of workspace memory, it will immediately spill to tempdb. Even if the system has plenty of workspace memory available during execution, once the query exceeds its granted workspace memory it has to spill to disk. Is my understanding correct?
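For observing this behaviour, the requested and granted workspace memory per query is exposed in a DMV; a minimal sketch:

SELECT session_id, requested_memory_kb, granted_memory_kb,
       used_memory_kb, ideal_memory_kb
FROM sys.dm_exec_query_memory_grants;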

Slow backup and extremely slow restores

Posted: 02 Aug 2013 01:49 PM PDT

I don't normally work with MySQL but with MS SQL, and I am having issues restoring a dump backup of a 9 GB database. I converted it to MS SQL and it takes a grand total of 4 minutes to restore, but the MySQL DB takes over an hour on the same server. The MySQL database is using InnoDB; is there an alternative for speeding up the restores? Both databases are on the same machine, Windows 2008 R2 in a VM with dynamic SANs.

Correction: it takes MS SQL 1 minute to restore, and 1 hour to restore the same database in MySQL.

EDIT: mysql.ini (with commented lines removed):

[client]
no-beep
port=3306

[mysql]
default-character-set=utf8

[mysqld]
port=3306
basedir="C:\Program Files\MySQL\MySQL Server 5.5\"
datadir="C:\ProgramData\MySQL\MySQL Server 5.5\data\"
character-set-server=utf8
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
log-output=NONE
general-log=0
general_log_file="CM999-SV510.log"
slow-query-log=0
slow_query_log_file="CM999-SV510-slow.log"
long_query_time=10
log-error="CM999-SV510.err"
max_connections=100
query_cache_size=0
table_cache=256
tmp_table_size=22M
thread_cache_size=8
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=43M
key_buffer_size=8M
read_buffer_size=64K
read_rnd_buffer_size=256K
sort_buffer_size=256K
innodb_additional_mem_pool_size=4M
innodb_flush_log_at_trx_commit=1
innodb_log_buffer_size=2M
innodb_buffer_pool_size=124M
innodb_log_file_size=63M
innodb_thread_concurrency=9
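Not from the question, but a commonly used sketch for speeding up a large InnoDB dump import is to relax checks for the duration of the load (the small innodb_buffer_pool_size and innodb_log_file_size above are also worth revisiting); the dump path below is hypothetical:

-- Run in the mysql client session that performs the import (revert afterwards)
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
SOURCE C:/backups/dump.sql;  -- mysql client command; hypothetical dump file path
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;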
