10 Common Mistakes That Businesses Make In Oracle Systems

An Oracle database needs to perform well for a business to grow and develop. However, many companies make the same general mistakes when it comes to their Oracle databases. Experts in Oracle database consulting services state that companies should follow the Oracle performance methodology to avoid these mistakes and, where needed, re-engineer their applications for optimal performance.

What are the common mistakes that businesses make when it comes to the performance of an Oracle database?

The Oracle database needs to perform optimally for any business to succeed. Oracle database consultants list the following 10 common mistakes that companies make with their Oracle databases:

1. Bad connection management

Applications built on stateless middleware in the application servers often connect to and disconnect from the database for every interaction. This pattern of connection management affects performance to a vast extent and is completely unscalable.
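Connections should be established once and reused rather than opened per request. If the middleware cannot pool connections itself, one server-side option is Oracle's Database Resident Connection Pooling (DRCP); the sketch below is illustrative only, and the pool sizes shown are placeholders, not recommendations.

-- Start and size the default server-side connection pool (run as an administrative user)
BEGIN
  DBMS_CONNECTION_POOL.START_POOL();
  DBMS_CONNECTION_POOL.CONFIGURE_POOL(
    pool_name => 'SYS_DEFAULT_CONNECTION_POOL',
    minsize   => 5,
    maxsize   => 40);
END;
/
-- Clients then request pooled servers, for example with (SERVER=POOLED) in the connect descriptor.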

2. Bad use of cursors and the shared pool

Failing to reuse cursors results in repeated parses, and if bind variables are not used, SQL statements are hard parsed every time. The business should open a cursor once and execute it many times, and it should be cautious of applications that generate dynamic SQL.
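As an illustration, the two queries below (against a hypothetical orders table) differ only in how the customer number is supplied: the literal forces a hard parse for each distinct value, while the bind variable lets every execution share one cursor in the shared pool.

-- Hard parse: each new literal value creates another statement in the shared pool
SELECT order_id, order_total FROM orders WHERE customer_id = 1042;

-- Shared cursor: the same statement text is reused for every customer
VARIABLE cust NUMBER
EXEC :cust := 1042
SELECT order_id, order_total FROM orders WHERE customer_id = :cust;

-- Caching frequently parsed cursors in the session also reduces soft parses
ALTER SESSION SET session_cached_cursors = 50;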

3. Bad SQL

SQL that uses more resources than the application reasonably requires is labeled bad SQL. For instance, a decision support query that runs for more than 24 hours, or an online application query that takes more than a minute, is bad SQL. SQL that consumes significant system resources should be investigated for possible performance improvements.

ADDM identifies high-load SQL, and the SQL Tuning Advisor provides recommendations for improving it.
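Outside of ADDM, a quick way to find candidate statements is to rank what is currently in the shared pool by resource use; a simple sketch (FETCH FIRST requires Oracle 12c or later):

-- Top 10 statements in the shared pool by total elapsed time
SELECT sql_id,
       executions,
       ROUND(elapsed_time / 1e6) AS elapsed_seconds,
       buffer_gets,
       disk_reads
  FROM v$sql
 ORDER BY elapsed_time DESC
 FETCH FIRST 10 ROWS ONLY;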

4. Use of non-standard initialization parameters

These parameters are generally set on the basis of incorrect assumptions or poor advice. Most Oracle databases give acceptable performance using only a basic set of parameters. In particular, parameters associated with SPIN_COUNT on latches and with undocumented optimizer features can cause problems that require a lot of investigation. Similarly, optimizer parameters set in the initialization parameter file can override proven optimal execution plans. For this reason, optimizer settings, schemas, and schema statistics should be managed together as a group so that they remain consistent.
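One way to see how far an instance has drifted from the basic set is to list every parameter that has been explicitly changed from its default:

-- Initialization parameters that are no longer at their default values
SELECT name, value
  FROM v$parameter
 WHERE isdefault = 'FALSE'
 ORDER BY name;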

5. Getting database I/O wrong

Several sites lay out their database poorly over the available disks, and others specify the wrong number of disks. This happens primarily because disks are configured by their storage capacity rather than by their I/O bandwidth.
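To check whether the I/O load is actually spread evenly across the data files, the cumulative read and write counts per file can be compared; a rough sketch:

-- Physical reads and writes per data file since instance startup
SELECT d.name,
       f.phyrds  AS physical_reads,
       f.phywrts AS physical_writes
  FROM v$filestat f
  JOIN v$datafile d ON d.file# = f.file#
 ORDER BY f.phyrds + f.phywrts DESC;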

6. Online redo log setup problems

Several sites run with too few online redo log files that are also too small. Small redo logs cause the system's checkpoints to continuously place a high load on the buffer cache and the I/O system. If there are too few redo log files, the archiver cannot keep up and the database has to wait for it to catch up. Experts in Oracle DBA consulting say the redo logs must be configured correctly, with an appropriate number and size, to support good database performance.
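Checking the current configuration and adding larger, multiplexed groups is straightforward; in the sketch below the group number, file paths, and 1G size are placeholders, not recommendations.

-- Current redo log groups, sizes, and states
SELECT group#, bytes / 1024 / 1024 AS size_mb, members, status
  FROM v$log;

-- Add a larger, multiplexed redo log group
ALTER DATABASE ADD LOGFILE GROUP 5
  ('/u01/oradata/ORCL/redo05a.log',
   '/u02/oradata/ORCL/redo05b.log') SIZE 1G;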

7. Serialization of data blocks

Serialization of data blocks in the buffer cache, caused by inadequate free lists, free list groups, or transaction slots, or by a shortage of rollback segments, hurts database performance. Businesses often face this issue in insert-heavy applications, in applications where the block size has been raised above 8K, and in applications with a high number of active users but too few rollback segments. Consultants in the field say businesses should deploy automatic segment space management (ASSM) along with automatic undo management to resolve the issue.
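A sketch of both recommendations follows; the tablespace name, data file path, and sizes are placeholders, and UNDO_MANAGEMENT is a static parameter, so the change only takes effect after the instance is restarted.

-- Tablespaces created with ASSM avoid manual free list and free list group tuning
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 2G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- Automatic undo management replaces manually managed rollback segments
ALTER SYSTEM SET undo_management = AUTO SCOPE = SPFILE;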

8. Long full-table scans

Long full-table scans in interactive or high-volume online applications indicate poor transaction design, missing indexes, or poor SQL optimization. Long table scans are inherently unscalable and I/O intensive.
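The cached execution plans can reveal which tables are repeatedly scanned in full; a sketch of one such check:

-- Tables accessed by full scans in currently cached execution plans
SELECT p.object_owner,
       p.object_name,
       COUNT(DISTINCT p.sql_id) AS statements
  FROM v$sql_plan p
 WHERE p.operation = 'TABLE ACCESS'
   AND p.options   = 'FULL'
 GROUP BY p.object_owner, p.object_name
 ORDER BY statements DESC;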

9. High volumes of recursive SQL executed by SYS

High volumes of recursive SQL executed by SYS indicate space management activity, such as extent allocation, taking place. This is unscalable and affects user response time. Businesses should deploy locally managed tablespaces to reduce the recursive SQL caused by extent allocation. Recursive SQL executed under a different user ID is probably SQL or PL/SQL, and this is not a problem.
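The data dictionary shows which tablespaces are still dictionary managed, and such a tablespace can be migrated in place; the tablespace name below is a placeholder.

-- Which tablespaces are locally managed and which are dictionary managed?
SELECT tablespace_name, extent_management
  FROM dba_tablespaces;

-- One option for a dictionary-managed tablespace is to migrate it in place
EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('APP_DATA');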

10. Deployment and migration errors

In several cases, an application uses too many resources primarily because the schema that owns the tables was not effectively migrated from the development environment or from a previous implementation. Examples of this are incorrect statistics and missing indexes.

These mistakes often lead to sub-optimal execution plans as well as poor interactive performance for users. When migrating an application of known performance, the schema statistics should be exported to maintain plan stability, for example with the DBMS_STATS package.
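A minimal sketch of that export and import with DBMS_STATS; the APP schema and the APP_STATS table name are placeholders.

-- In the source environment: capture the schema statistics in a statistics table
EXECUTE DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP', stattab => 'APP_STATS');
EXECUTE DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'APP_STATS');

-- After moving the statistics table to the target environment: re-apply the statistics
EXECUTE DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'APP_STATS');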

Consultants in Oracle DBA services state that the above 10 mistakes are not all directly identified by ADDM, which generally highlights the high-load SQL in the system. Businesses should therefore make sure these mistakes are identified and addressed promptly for optimal performance. They also say that when businesses create an Oracle database, they should take its scalability needs into consideration.

The objective here is to achieve optimal scalability. This is referred to as linear scalability, where the throughput of the system is in direct proportion to the number of CPUs in the system. Businesses should be aware of the factors that prevent optimal scalability and take the right measures to rectify them.

This article was written by guest writer Karen.
