Wednesday 19 March 2014

Daily Activities of Oracle Database Administrator


  Daily Work of DBA:
1: Health check of the database.
2: Reviewing the alert log file to analyze any ORA- errors.
3: Tablespace utilization.
4: Rebuilding indexes after bulk data loads.
5: Viewing the temporary files through DBA_TEMP_FILES.
6: Database growth comparison.
7: User management.
8: Backing up the archive log files.
9: Monitoring backups.
10: Monitoring the log files, backups, database space usage and the use of system resources.
11: Exports/Imports.
12: Monitoring tablespace segments.
13: Monitoring production database performance.
14: Resolving ORA- errors.
15: Taking COLD/RMAN backups at night.
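Items 3 and 6 above usually come down to one query against the data dictionary. A minimal sketch (the DBA_* views are standard; the MB rounding and ordering are just one way to present it):

```sql
-- Sketch: per-tablespace space usage report
SELECT d.tablespace_name,
       ROUND(SUM(d.bytes)/1024/1024)                        AS alloc_mb,
       ROUND(NVL(f.free_bytes,0)/1024/1024)                 AS free_mb,
       ROUND(100*(1 - NVL(f.free_bytes,0)/SUM(d.bytes)), 1) AS pct_used
FROM   dba_data_files d,
       (SELECT tablespace_name, SUM(bytes) AS free_bytes
        FROM   dba_free_space
        GROUP BY tablespace_name) f
WHERE  d.tablespace_name = f.tablespace_name(+)
GROUP BY d.tablespace_name, f.free_bytes
ORDER BY pct_used DESC;
```

Running this once a day and keeping the output makes the growth comparison a matter of diffing two reports.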



  Weekly Work of DBA:
1: Growth of the Database.
2: Total full backup of the database through hot backup.
3: Taking logical backups, in case of physical backups failure.
4: Taking weekly Tablespace backup.


  Monthly Work of DBA:
1: Index Rebuild.
2: Tablespace Reorganization.

    Quarterly Work of DBA:
1: Patching
2: Database Reorganization

ORA-01012: not logged on


This generally occurs under heavy load, or when the database has run out of available connections. Once the number of connected sessions reaches the maximum defined by the 'processes' parameter, the database refuses new connections, for SYSDBA as well as for other users. It can also occur after an improper shutdown of the database: the session appears connected, but any query against the database fails with ORA-01012: not logged on.
Sometimes connecting as SYSDBA shows an idle instance, but when you issue STARTUP it fails with ORA-01081: cannot start already-running ORACLE - shut it down first.
In the above case:
Either shut down the application as well as the DB server (if possible), or kill the unused Oracle processes that are holding the resources (make sure you do not kill background processes; you can use Toad to kill the processes). Then log in as SYSDBA and increase the value of the 'processes' parameter.
SQL> show parameter sessions
SQL> show parameter processes
SQL> show parameter transactions
SQL> SELECT value FROM V$PARAMETER
     WHERE NAME = 'processes';
SQL> ALTER SYSTEM SET processes=300 SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET sessions=355 SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET transactions=410 SCOPE=SPFILE SID='*';
Note: processes, sessions and transactions are static parameters, so SCOPE=BOTH is not allowed; the new values take effect only after the instance is restarted.
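Before the restart, the sessions occupying the process slots can be identified and, if genuinely unused, killed from the database side. A sketch; the SID and SERIAL# in the kill command are placeholders you must take from the query output:

```sql
-- List non-background (user) sessions that may be candidates for killing
SELECT sid, serial#, username, status, program
FROM   v$session
WHERE  type = 'USER';

-- Kill a specific session (placeholder values; never kill BACKGROUND sessions)
-- ALTER SYSTEM KILL SESSION '<sid>,<serial#>' IMMEDIATE;
```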

DBA Daily/Weekly/Monthly or Quarterly Checklist




------------------------------------------------------------------------------------------------------------------------
Daily Checks:
Verify that all databases, instances and listeners are up, every 30 minutes.
Verify the status of daily scheduled jobs/daily backups in the first hour of the morning.
Verify the success of archive log backups, based on the backup interval.
Check the space usage of the archive log file system for both primary and standby databases.
Check the space usage and verify that all tablespace usage is below the critical level, once a day.
Verify rollback segments.
Check the database performance periodically, usually in the first hour of the morning after the scheduled night backup has completed.
Check the sync between the primary database and standby database, every 20 minutes.
Make a habit of checking new alert log entries hourly, especially if any errors are being reported.
Check the system performance periodically.
Check for invalid objects.
Check the audit files for any suspicious activity.
Identify bad growth projections.
Clear the trace files in the udump and bdump directories as per policy.
Verify all monitoring agents, including the OEM agent and third-party monitoring agents.
Make a habit of reading the DBA manual.
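The invalid-objects check above is a simple query against DBA_OBJECTS; invalid objects can then be recompiled with the standard UTL_RECOMP package. A sketch (the schema name is a placeholder):

```sql
-- List invalid objects across the database
SELECT owner, object_type, object_name
FROM   dba_objects
WHERE  status = 'INVALID'
ORDER BY owner, object_type, object_name;

-- Recompile everything invalid in one schema (placeholder schema name)
-- EXEC UTL_RECOMP.RECOMP_SERIAL('SCOTT');
```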
Weekly Checks:
Perform a level 0 or cold backup as per the backup policy. Note that the backup policy can be changed as per requirements. Don't forget to check the space on disk or tape before performing a level 0 or cold backup.
Perform export backups of important tables.
Check database statistics collection. On some databases this needs to be done every day, depending on the requirement.
Approve or plan any scheduled changes for the week.
Verify the scheduled jobs and clear the output directory. You can also automate this.
Look for objects that break rules.
Look for security policy violations.
Archive the alert logs (if possible) to make it easy to reference similar errors in the future.
Visit the home pages of key vendors.
Monthly or Quarterly Checks:
Verify the accuracy of backups by creating test databases.
Check for Critical Patch Updates from Oracle and make sure your systems are in compliance with CPU patches.
Check for harmful growth rates.
Review fragmentation.
Look for I/O contention.
Perform tuning and database maintenance.
Verify the accuracy of the DR mechanism by performing a database switchover test. This can be done once every six months, based on business requirements.
-------------------------------------------------------------------------------------------------------------------------------------------------------
Below is a brief description of some of the important concepts, including important SQL scripts. You can find more scripts in my other posts by using the blog search option.
Verify all instances are up:
Make sure each database is available. Log into each instance and run daily reports or test scripts. You can also automate this procedure, but it is better to do it manually. Optional implementation: use Oracle Enterprise Manager's 'probe' event.
Verify DBSNMP is running:
Log on to each managed machine to check for the 'dbsnmp' process. On Unix, at the command line, type ps -ef | grep dbsnmp. There should be two dbsnmp processes running. If not, restart DBSNMP.
Verify success of Daily Scheduled Job:
Each morning, one of your prime tasks is to check the backup log and the backup drive where the actual backup is stored, to verify the night's backup.
Verify success of database archiving to tape or disk:
Next, check the location where the daily archive logs are stored, and verify the archive backup on disk or tape.
Verify enough resources for acceptable performance:
For each instance, verify that enough free space exists in each tablespace to handle the day's expected growth. As of <date>, the minimum free space for <repeat for each tablespace>: [ <tablespace> is <amount> ]. When incoming data is stable and average daily growth can be calculated, the minimum free space should be at least <time to order, get, and install more disks> days' data growth. Go to each instance and run a query to check free MB in tablespaces/datafiles. Compare it to the minimum free MB for that tablespace. Note any low-space conditions and correct them.
Verify rollback segment:
Status should be ONLINE, not OFFLINE or FULL, except in some cases where you may have a special rollback segment for large batch jobs whose normal status is OFFLINE. Optional: each database may have a list of rollback segment names and their expected statuses. For the current status of each ONLINE or FULL rollback segment (by ID, not by name), query V$ROLLSTAT. For the storage parameters and names of ALL rollback segments, query DBA_ROLLBACK_SEGS. That view's STATUS field is less accurate than V$ROLLSTAT, however, as it lacks the PENDING OFFLINE and FULL statuses, showing these as OFFLINE and ONLINE respectively.
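The rollback-segment check described above can be scripted directly against the two views mentioned; a minimal sketch:

```sql
-- Names, tablespaces and expected statuses of all rollback segments
SELECT segment_name, tablespace_name, status
FROM   dba_rollback_segs;

-- Current status of each ONLINE/FULL segment, by undo segment number (USN)
SELECT usn, xacts, rssize, status
FROM   v$rollstat;
```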
Look for any new alert log entries:
Connect to each managed system. Use 'telnet' or comparable program. For each managed instance, go to the background dump destination, usually $ORACLE_BASE/<SID>/bdump. Make sure to look under each managed database's SID. At the prompt, use the Unix ‘tail’ command to see the alert_<SID>.log, or otherwise examine the most recent entries in the file. If any ORA-errors have appeared since the previous time you looked, note them in the Database Recovery Log and investigate each one. The recovery log is in <file>.
Identify bad growth projections:
Look for segments in the database that are running out of resources (e.g. extents) or growing at an excessive rate. The storage parameters of these segments may need to be adjusted. For example, if any object has reached 200 current extents, raise its MAX_EXTENTS to UNLIMITED. To do this, run queries to gather daily sizing information, check current extents, current table sizing information and current index sizing information, and find growth trends.
Identify space-bound objects:
Space-bound objects' NEXT_EXTENTs are bigger than the largest free extent that the tablespace can offer. Space-bound objects can harm database operation. If we find such an object, first investigate the situation; then we can use ALTER TABLESPACE <tablespace> COALESCE, or add another datafile. Run spacebound.sql. If all is well, zero rows will be returned.
Processes to review contention for CPU, memory, network or disk resources:
To check CPU utilization, go to => system metrics => CPU utilization page. 400 is the maximum CPU utilization because there are 4 CPUs on the phxdev and phxprd machines. We need to investigate if CPU utilization stays above 350 for a while.
Make a habit to Read DBA Manual:
Nothing is more valuable in the long run than that the DBA be as widely experienced, and as widely read, as possible. Reading should include DBA manuals, trade journals, and possibly newsgroups or mailing lists.
Look for objects that break rules:
For each object-creation policy (naming convention, storage parameters, etc.) have an automated check to verify that the policy is being followed. Every object in a given tablespace should have the exact same size for NEXT_EXTENT, which should match the tablespace default for NEXT_EXTENT. As of 10/03/2012, the default NEXT_EXTENT for DATAHI is 1 GB (1,073,741,824 bytes), for DATALO 500 MB (524,288,000 bytes), and for INDEXES 256 MB (268,435,456 bytes). To check settings for NEXT_EXTENT, run nextext.sql. To check existing extents, run existext.sql.
All tables should have unique primary keys:
To check missing PK, run no_pk.sql. To check disabled PK, run disPK.sql. All primary key indexes should be unique. Run nonuPK.sql to check. All indexes should use INDEXES tablespace. Run mkrebuild_idx.sql. Schemas should look identical between environments, especially test and production. To check data type consistency, run datatype.sql. To check other object consistency, run obj_coord.sql.
Look for security policy violations:
Look in the SQL*Net logs for errors and issues, on both the client side and the server side, and archive all alert logs to history.
Visit home pages of key vendors:
For new update information, make a habit of visiting the home pages of key vendors, such as:
Oracle Corporation: http://www.oracle.com, http://technet.oracle.com, http://www.oracle.com/support, http://www.oramag.com
Quest Software: http://www.quest.com
Sun Microsystems: http://www.sun.com
Look for Harmful Growth Rates:
Review changes in segment growth when compared to previous reports to identify segments with a harmful growth rate. 
Review Tuning Opportunities and Perform Tuning Maintenance:
Review common Oracle tuning points such as cache hit ratio, latch contention, and other points dealing with memory management. Compare with past reports to identify harmful trends or determine impact of recent tuning adjustments. Make the adjustments necessary to avoid contention for system resources. This may include scheduled down time or request for additional resources.
Look for I/O Contention:
Review database file activity. Compare to past output to identify trends that could lead to possible contention.
Review Fragmentation:
Investigate fragmentation (e.g. row chaining, etc.).
Project Performance into the Future:
Compare reports on CPU, memory, network, and disk utilization from both Oracle and the operating system to identify trends that could lead to contention for any one of these resources in the near future. Compare performance trends to the Service Level Agreement to see when the system will go out of bounds.
--------------------------------------------------------------------------------------------
Useful Scripts:
--------------------------------------------------------------------------------------------
Script: To check free, pct_free, and allocated space within a tablespace
SELECT tablespace_name, largest_free_chunk, nr_free_chunks, sum_alloc_blocks, sum_free_blocks
, to_char(100*sum_free_blocks/sum_alloc_blocks, '09.99') || '%' AS pct_free
FROM ( SELECT tablespace_name, sum(blocks) AS sum_alloc_blocks
FROM dba_data_files
GROUP BY tablespace_name),
( SELECT tablespace_name AS fs_ts_name, max(blocks) AS largest_free_chunk
, count(blocks) AS nr_free_chunks, sum(blocks) AS sum_free_blocks
FROM dba_free_space
GROUP BY tablespace_name )
WHERE tablespace_name = fs_ts_name;
Script: To analyze tables and indexes
BEGIN
   dbms_utility.analyze_schema ( '&OWNER', 'ESTIMATE', NULL, 5 ) ;
END ;
/
Script: To find out any object reaching <threshold>
SELECT e.owner, e.segment_type , e.segment_name , count(*) as nr_extents , s.max_extents
, to_char ( sum ( e.bytes ) / ( 1024 * 1024 ) , '999,999.90') as MB
FROM dba_extents e , dba_segments s
WHERE e.segment_name = s.segment_name AND e.owner = s.owner
GROUP BY e.owner, e.segment_type , e.segment_name , s.max_extents
HAVING count(*) > &THRESHOLD
OR ( ( s.max_extents - count(*) ) < &&THRESHOLD )
ORDER BY count(*) desc;
The above query will find any object reaching <threshold> extents; you then have to manually upgrade it to allow unlimited MAX_EXTENTS (thus only objects we expect to be big are allowed to become big).
Script: To identify space-bound objects. If all is well, no rows are returned.
SELECT a.table_name, a.next_extent, a.tablespace_name
FROM all_tables a,( SELECT tablespace_name, max(bytes) as big_chunk
FROM dba_free_space
GROUP BY tablespace_name ) f
WHERE f.tablespace_name = a.tablespace_name AND a.next_extent > f.big_chunk;
Run the above query to find space-bound objects. If all is well, no rows are returned. If something is found, look at the value of NEXT extent and investigate what happened, then use coalesce (ALTER TABLESPACE <foo> COALESCE;) and, finally, add another datafile to the tablespace if needed.
Script: To find tables that don't match the tablespace default for NEXT extent.
SELECT segment_name, segment_type, ds.next_extent as Actual_Next
, dt.tablespace_name, dt.next_extent as Default_Next
FROM dba_tablespaces dt, dba_segments ds
WHERE dt.tablespace_name = ds.tablespace_name
AND dt.next_extent !=ds.next_extent AND ds.owner = UPPER ( '&OWNER' )
ORDER BY tablespace_name, segment_type, segment_name;
Script: To check existing extents
SELECT segment_name, segment_type, count(*) as nr_exts
, sum ( DECODE ( dx.bytes,dt.next_extent,0,1) ) as nr_illsized_exts
, dt.tablespace_name, dt.next_extent as dflt_ext_size
FROM dba_tablespaces dt, dba_extents dx
WHERE dt.tablespace_name = dx.tablespace_name
AND dx.owner = '&OWNER'
GROUP BY segment_name, segment_type, dt.tablespace_name, dt.next_extent;
The above query will find how many of each object's extents differ in size from the tablespace's default size. If it shows a lot of differently sized extents, your free space is likely to become fragmented. If so, this tablespace needs to be reorganized.
Script: To find tables without PK constraint
SELECT table_name FROM all_tables
WHERE owner = '&OWNER'
MINUS
SELECT table_name FROM all_constraints
WHERE owner = '&&OWNER' AND constraint_type = 'P';
Script: To find out which primary keys are disabled
SELECT owner, constraint_name, table_name, status
FROM all_constraints
WHERE owner = '&OWNER' AND status = 'DISABLED' AND constraint_type = 'P';
Script: To find tables with nonunique PK indexes.
SELECT index_name, table_name, uniqueness
FROM all_indexes
WHERE index_name like '&PKNAME%'
AND owner = '&OWNER' AND uniqueness = 'NONUNIQUE';
SELECT c.constraint_name, i.tablespace_name, i.uniqueness
FROM all_constraints c , all_indexes i
WHERE c.owner = UPPER ( '&OWNER' ) AND i.uniqueness = 'NONUNIQUE'
AND c.constraint_type = 'P' AND i.index_name = c.constraint_name;
Script: To check datatype consistency between two environments
SELECT table_name, column_name, data_type, data_length,data_precision,data_scale,nullable
FROM all_tab_columns -- first environment
WHERE owner = '&OWNER'
MINUS
SELECT table_name,column_name,data_type,data_length,data_precision,data_scale,nullable
FROM all_tab_columns@&my_db_link -- second environment
WHERE owner = '&OWNER2'
order by table_name, column_name;
Script: To find out any difference in objects between two instances
SELECT object_name, object_type
FROM user_objects
MINUS
SELECT object_name, object_type
FROM user_objects@&my_db_link;

The Seven Deadly Habits of a DBA ... and how to cure them


by Pavan
Calling widespread bad habits in database administration "deadly" may seem extreme. However, when you consider the critical nature of most data, and just how damaging data loss or corruption can be to a corporation, "deadly" seems pretty dead-on.
Although these habits are distressingly common among DBAs, they are curable with some shrewd management intervention. What follows is a list of the seven habits we consider the deadliest, along with some ideas on how to eliminate them.
Habit #1. THE LEAP OF FAITH: "We have faith in our backup."
Blind faith can be endearing, but not when it comes to backing up a database. Backups should be trusted only as far as they have been tested and verified.
Cures:
  • Have your DBAs verify that the backup is succeeding regularly, preferably using a script that notifies them if there's an issue.
  • Maintain a backup to your backup. DBAs should always use at least two backup methods. A common technique is to use those old-fashioned exports as a backup to the online backups.
  • Resource test recoveries as often as is practical. An early sign that your DBA team is either overworked or not prioritizing correctly is having a quarter go by without a test recovery. Test recoveries confirm that your backup strategy is on track, while allowing your team to practice recovery activities so they can handle them effectively when the time comes.
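Between full test recoveries, RMAN can at least confirm that the backup pieces needed for a restore exist and are readable. A sketch of the validation commands (they read the backups but do not restore anything):

```
RMAN> RESTORE DATABASE VALIDATE;
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;
```

These runs make a useful scripted companion to the notification script above, but they do not replace an actual test recovery.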
Habit #2. GREAT EXPECTATIONS: "It will work the way we expect it to. Let's go ahead."
Although not user friendly in the traditional sense, Oracle is very power-user friendly— once you've been working with it for a while, you develop an instinct for the way things "should" work. Although that instinct is often right, one of the most dangerous habits any DBA can possess is an assumption that Oracle will "just work" the way it should.
Cures:
  • Inculcate a "practice, practice, practice" mentality throughout the organization. DBAs need to rehearse activities in the safe sandbox of a test environment that's designed to closely mimic the behaviour of the production system. The organization needs to allow the time and money for them to do so.
  • Pair inexperienced DBAs with senior ones whenever possible—or take them under your own wing. New DBAs tend to be fearless, but learning from someone else's experience can help instill some much needed paranoia.
  • Review the plans for everything. It's amazing how often DBAs say, "I've done that a hundred times, I don't need a plan." If they're heading into execution mode, they absolutely need a plan.
Habit #3. LAISSEZ-FAIRE ADMINISTRATION: "We don't need to monitor the system. The users always let us know when something's wrong."
If you depend on the users to inform the DBA team that there's a problem, it may already be too late.
Cures:
  • Install availability and performance monitoring systems so that issues are identified and resolved before they cause service-affecting failures.
  • Avoid post-release software issues by working with developers and testers to ensure that all production-ready software is stable and high-performance.
Habit #4. THE MEMORY TEST: "We'll remember how this happened, and what we did to get things going again."
It may seem impossible that a DBA team would forget a massive procedure that took them weeks to get right, and yet it happens all the time. In order to prevent recurring mistakes and take advantage of gained experience, documentation is essential.
Cures:
  • Require that your DBAs maintain a comprehensive documentation library and activity diary, including a significant level of rationale, syntax, and workflow detail.
  • Provide your team with groupware on your intranet so that these documents become searchable in an emergency.
  • Enforce the discipline of documentation and check it periodically. Ask your DBAs: When was this tablespace created, by whom, and with what SQL? What tasks were performed on a particular day? If they can't answer quickly, you'll know they've gone back to relying on memory.
Habit #5. THE BLAME GAME: "Don't look at me, it's the developer's fault that SQL is in production"
Some DBAs have a real "us versus them" mentality when it comes to developers in their organization. They see themselves not as facilitators helping the developers develop quality code from a database standpoint, but rather as guardians who prevent poor-quality code from making it into production. This might seem like semantics, but a confrontational relationship between developers and DBAs results in a lack of developer initiative and significant slowdowns in release cycles.
Cures:
  • Select DBAs who understand it's their responsibility to work as an integrated team with the developers they support.
  • Cultivate a team attitude by structuring continuous DBA involvement in every project rather than at review milestones.
  • Consider assigning an individual DBA in a developer support role. If it's clearly in the job description, there's more motivation to do it well.
Habit #6. THE SOLO ACT: "I know what I'm doing and don't need any help."
Database administration is increasingly complex and even the most senior DBAs can't possibly know every last detail. DBAs have different specialties, which need to be culled and utilized. When DBAs feel like they know, or should know, everything, they don't ask questions and miss out on valuable knowledge they could be gaining from others.
Cures:
  • Foster a teamwork culture where it's acceptable for DBAs to admit they don't know the answer and to ask for help.
  • Encourage your DBAs to seek out an outside peer group as a forum for brainstorming and testing their assumptions. No single person can match the expertise and experience of even a relatively small group.
  • Provide a safety net of tech resources such as reference materials, courses, and outside experts or consultants on call.
Habit #7. TECHNO-LUST: "Things would work so much better if only we had..."
DBAs are often on top of the latest technology, which can help them do a superlative job. But when the desire for new technology causes DBAs to recommend unnecessary hardware purchases or software add-ons, costs tend to skyrocket quickly—as do problems.
Cures:
  • Never upgrade your hardware infrastructure without first exhausting all tuning opportunities. Remember, ten years ago enormous enterprises were run on servers with one-tenth the capacity, all thanks to necessity and skill.
  • Never consent to using advanced or new features until you're well aware of the ongoing maintenance commitment and resulting costs.
  • Watch out for DBA support software that presents friendly GUI interfaces for difficult tasks. This type of interface allows a beginner DBA to act as an intermediate DBA under certain circumstances, but simultaneously prevents that beginner from learning the actual skills behind the tasks. Moreover, these tools tend to hide real risks from the DBA, making potentially damaging activities as easy as point-and-click.
Whether it takes a twelve-step program or one tiny adjustment, all of these deadly DBA habits can be kicked. Of course, the first step is recognizing the problem. By starting with this list and doing a careful inventory of the successes and failures in your team's database administration, you'll be well on your way to finding a cure.

Tuesday 18 March 2014

Tablespace Management FAQs




1) What is Block or Database Block?
Storage area for storing rows
2) What is the unit of Block?
Bytes
3) What is the default Standard Block Size in Oracle 10g?
8KB
3) What is Non Standard Block Size?
Other than 8KB
4) How many Non-Standard Block Sizes are supported by Oracle? What are they?
Four: 2KB, 4KB, 16KB, 32KB
5) What is Extent?
A Collection of Blocks
6) What is Segment?
A Collection of Extents
7) What are the types of Segments?
Table, Index, Undo & Temporary
8) What is Table Segment?
It stores actual committed data
9) What is Index Segment?
It stores Indexes information which are created on tables
10) What is Undo Segment?
It stores pre-image value
11) What are the advantages of Undo Segment?
It provides read consistency and is used by SMON to roll back uncommitted transactions during instance recovery (the roll backward phase)
12) What is Temporary Segment?
It is used for sort operations such as ORDER BY, GROUP BY, joins and HAVING
13) What is Tablespace?
It is a logical collection of segments; physically it is related to one or more data files
14) What are the types of Tablespaces?
Permanent, Temporary
15) What is Logical Structure?
Tablespace
16) What is the hierarchy of the Logical Structures?
Tablespace Contains Segments Contains Extents Contains Blocks
17) What are the Mandatory Tablespaces during Database Creation?
SYSTEM & SYSAUX
18) What are the Mandatory Tablespaces during Database Usage?
TEMPORARY, UNDO & Application (User) Tablespaces
19) What is SYSTEM Tablespace?
It contains Metadata Information (Data Dictionary)
20) What is SYSAUX Tablespace?
It contains Database Workload Information
21) When was the SYSAUX Tablespace introduced?
Oracle 10g
21) What is Undo Tablespace?
It stores Pre-Image value
22) What is Permanent Tablespace?
It stores the data permanently
23) What are the types of Permanent Tablespace?
SYSTEM, SYSAUX, User defined tablespaces
24) What is Temporary Tablespace?
It stores the data temporarily during Sorting Operations, Join Operations
25) What is Temporary Tablespace Group?
It is a logical container that holds more than one temporary tablespace at a time
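A temporary tablespace group is created implicitly the first time a temporary tablespace is assigned to it; a sketch with placeholder names, paths and sizes:

```sql
-- Assigning a member creates the group implicitly (all names/paths are placeholders)
CREATE TEMPORARY TABLESPACE temp1
  TEMPFILE '/u01/oradata/db/temp1.dbf' SIZE 500M
  TABLESPACE GROUP temp_grp;

-- An existing temporary tablespace can be added the same way
ALTER TABLESPACE temp2 TABLESPACE GROUP temp_grp;

-- The group can then be used wherever a temporary tablespace name is expected
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;
```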
26) What is Extent Management?
Allocating & Reallocating the Blocks
27) What are the types of Extent Management?
Locally Managed, Dictionary Managed
28) What type of Extent Management is preferred?
Locally Managed Tablespace (LMT)
29) What is Locally Managed Tablespace (LMT)?
Extent information is stored in the form of bitmaps in the data file header
The bitmap indicates whether the blocks are free or used
30) What is Dictionary Managed Tablespace (DMT)?
Extent information is stored in the data dictionary tables
31) What is Recursive SQL?
Internal SQL that Oracle issues to read and update the data dictionary tables
32) What are the drawbacks of Dictionary Managed Tablespace?
Extent allocation and deallocation update the data dictionary tables, resulting in contention on them
Every space management operation generates recursive SQL, along with undo and redo for the dictionary changes
(In contrast, LMTs manage extent sizes automatically, their bitmap changes do not generate undo, and they avoid recursive space management operations)
32) What is Segment Space Management?
Managing the space in the segments
33) What are the types of Segment Space Management?
Manual Segment Space Management, Automatic Segment Space Management
34) What type of Segment Space Management is preferred? Why?
Automatic, everything will be managed by Oracle
35) What are the drawbacks of Manual Segment Space Management?
We have to specify PCTFREE, PCTUSED
36) What is PCTFREE?
The percentage of each block reserved for future UPDATEs of existing rows; INSERTs stop once free space falls to PCTFREE
37) What is PCTUSED?
The threshold of used space below which a block becomes available for INSERTs again
38) What is Auto Extend on with Tablespace?
When the tablespace fills up, its data files grow automatically by the specified increment
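Autoextend is enabled per data file, with a growth increment and an upper bound; a sketch with a placeholder path and example sizes:

```sql
-- Grow this file in 100M steps, but never beyond 8G (path/sizes are placeholders)
ALTER DATABASE DATAFILE '/u01/oradata/db/users01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
```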
39) How to make a tablespace OFFLINE?
ALTER TABLESPACE <tablespace_name> OFFLINE;
40) What are the different options available when we make a tablespace OFFLINE?
NORMAL, TEMPORARY, IMMEDIATE, FOR RECOVER
41) What is Tablespace OFFLINE NORMAL?
It performs Checkpoint
It can be made OFFLINE only when there are no error conditions in data files
42) What is Tablespace OFFLINE TEMPORARY?
It performs Checkpoint
A tablespace can be taken OFFLINE even if there are error conditions on data files
43) What is Tablespace OFFLINE IMMEDIATE?
It does not perform any Checkpoint
It requires Media Recovery before making tablespace ONLINE
44) Is it possible to make a Tablespace OFFLINE when the database is running in NOARCHIVELOG mode?
OFFLINE NORMAL is still possible, but OFFLINE IMMEDIATE is not allowed in NOARCHIVELOG mode
45) What is Tablespace OFFLINE FOR RECOVER?
It is deprecated & used for backward compatibility
46) What is Physical Structure?
Data File
47) What does a Tablespace logically contain?
Segments, Extents & Blocks
48) What does a Tablespace physically contain?
Data Files
49) What are the types of Tablespaces with respect to the data files?
Small File Tablespace & Big File Tablespace
50) What is the small file tablespace?
It is the traditional, default type of tablespace
51) How many data files can exist in small file tablespace?
1,022 per small file tablespace (the database as a whole can have at most 65,533 data files)
52) What is big file tablespace?
It is a tablespace with a single data file that can store up to 128TB of data (with a 32KB block size)
53) How many data files can exist in big file tablespace?
Single
54) What are the advantages of Big File Tablespace?
One Tablespace has only one data file
No need to constantly add data files to Tablespace
Data File Management in Large Databases is simplified
Storage capacity is increased
55) What is LOGGING with respect to the tablespace?
Changes are tracked in Online Redo Log Files
56) What is NOLOGGING with respect to the tablespace?
Changes are not tracked in Online Redo Log Files
57) What is UNIFORM with respect to the tablespace?
Extents are allocated with a uniform size which is specified by the user
58) What is AUTO ALLOCATE with respect to the tablespace?
Extent sizes are chosen and allocated automatically by Oracle
59) What is the default option for allocating the extents that is UNIFORM or AUTO ALLOCATE?
Auto Allocated
60) Is it possible to Drop/Rename SYSTEM/SYSAUX Tablespace?
No
61) What is Default Permanent Tablespace?
It is assigned to any user who has not been allocated a specific permanent (application) tablespace
62) What is Default Temporary Tablespace?
It is assigned to any user who has not been allocated a specific temporary tablespace
63) How to move a table from one tablespace to another tablespace?
ALTER TABLE <table_name> MOVE TABLESPACE <tablespace_name>;
64) How to move an index from one tablespace to another tablespace?
ALTER INDEX <index_name> REBUILD TABLESPACE <tablespace_name>;
65) What are storage parameters in Tablespace (DMT)?
Free Lists, PCT FREE, PCT USED
66) What are free lists?
Lists of blocks that have space available for INSERTs
67) What is PCTFREE?
The percentage of a block reserved for future UPDATEs
68) What is PCTUSED?
The threshold of used space below which a block returns to the free list and accepts INSERTs again
69) How to DE allocate the unused Extents?
ALTER TABLE <table_name> DEALLOCATE UNUSED;
70) Create a tablespace of BLOCKSIZE 16K?
CREATE TABLESPACE <tablespace_name> DATAFILE <path> BLOCKSIZE 16K;
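Note that this only works if a buffer cache for the non-standard block size has been configured first; a sketch with a placeholder path and example sizes:

```sql
-- A 16K buffer cache must exist before a 16K-block tablespace can be created
ALTER SYSTEM SET db_16k_cache_size = 32M;

CREATE TABLESPACE ts16k
  DATAFILE '/u01/oradata/db/ts16k01.dbf' SIZE 500M
  BLOCKSIZE 16K;
```

Without the db_16k_cache_size setting, the CREATE TABLESPACE statement fails.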
71) What is DROP TABLESPACE <tablespace_name>?
It will drop only the tablespace logically
72) What is DROP TABLESPACE <tablespace_name> INCLUDING CONTENTS?
It will drop the tablespace along with the contents (Segments, Extents & Blocks)
73) What is DROP TABLESPACE <tablespace_name> INCLUDING CONTENTS AND DATAFILES?
It will drop the tablespace along with the contents & Physical Data Files
74) How to rename a tablespace?
ALTER TABLESPACE <tablespace_name> RENAME TO <new_tablespace_name>;
75) How to make a tablespace READ ONLY?
ALTER TABLESPACE <tablespace_name> READ ONLY;
76) How to make a tablespace READ WRITE?
ALTER TABLESPACE <tablespace_name> READ WRITE;
77) How to make a tablespace ONLINE?
ALTER TABLESPACE <tablespace_name> ONLINE;
78) How to make a tablespace OFFLINE?
ALTER TABLESPACE <tablespace_name> OFFLINE;
79) Which data dictionary views give information about tablespaces?
DBA_TABLESPACES, V$TABLESPACE
83) Which data dictionary view gives information about segments?
DBA_SEGMENTS
84) Which data dictionary view gives information about extents?
DBA_EXTENTS
85) Which data dictionary views give information about data files?
DBA_DATA_FILES, V$DATAFILE
86) Which data dictionary view gives information about temporary files?
DBA_TEMP_FILES
87) Which data dictionary view gives information about temporary tablespace groups?
DBA_TABLESPACE_GROUPS
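A sketch of how a temporary tablespace group is populated and then queried through DBA_TABLESPACE_GROUPS; the group and tablespace names (temp_grp, temp01) are hypothetical, and temp01 is assumed to be an existing temporary tablespace:

```sql
-- Adding a temporary tablespace to a group creates the group if needed:
ALTER TABLESPACE temp01 TABLESPACE GROUP temp_grp;

-- List group membership:
SELECT group_name, tablespace_name
  FROM dba_tablespace_groups;
```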

How to Kill, Stop and Restart Data Pump Jobs on Windows

Data Pump jobs (expdp, impdp) can be stopped, killed or resumed from the database level. Killing or stopping the client process at the OS level does not kill or stop the job itself; it has to be done from the database, as shown below:

Exporting a full database backup :-

C:\Users\NewAdmin>expdp system/orcl directory=datapump dumpfile=expdp_full_29062013.dmp logfile=expdp_full_29062013.log compression=all full=y &

Export: Release 11.2.0.1.0 - Production on Sat Jun 29 19:46:41 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** directory=datapump dumpfile=expdp_full_29062013.dmp logfile=expdp_full_29062013.log compression=all full=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA

Stop a datapump job :-

A Data Pump job can be stopped in two ways: press CTRL+C in the running export session to get the Export> prompt, or attach to an already running job. Then issue STOP_JOB.

If the job is running in the foreground, pressing CTRL+C drops you to the Export> prompt:

Export> STOP_JOB
Are you sure you wish to stop this job ([yes]/no): y

The Data Pump job is now stopped.

If the Data Pump job is running in the background, you have to attach to it. Find the job name using the query below:

SQL> SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
OWNER_NAME                     JOB_NAME                       OPERATION                           JOB_MODE                  STATE
------------------------------ ------------------------------ ----------------------------------- ------------------------- -----------------------------
SYSTEM                         SYS_EXPORT_FULL_02             EXPORT                              FULL                      EXECUTING

Now attach to the running job :-

C:\Users\NewAdmin>expdp system/orcl attach=SYS_EXPORT_FULL_02

Export: Release 11.2.0.1.0 - Production on Sat Jun 29 20:01:45 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: SYS_EXPORT_FULL_02
  Owner: SYSTEM
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 42902560C22D46BDA6924724126F59FD
  Start Time: Saturday, 29 June, 2013 20:01:08
  Mode: FULL
  Instance: orcl
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        system/******** directory=datapump dumpfile=expdp_full_29062013_1.dmp logfile=expdp_full_29062013_1.log compression=all full=y
     COMPRESSION           ALL
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: C:\DATAPUMP\EXPDP_FULL_29062013_1.DMP
    bytes written: 4,096

Worker 1 Status:
  Process Name: DW00
  State: EXECUTING

Export> STOP_JOB
Are you sure you wish to stop this job ([yes]/no): y

You will get a message like this :-

Job "SYSTEM"."SYS_EXPORT_FULL_02" stopped by user request at 20:02:01

SQL> SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
OWNER_NAME                     JOB_NAME                       OPERATION                           JOB_MODE                  STATE
------------------------------ ------------------------------ ----------------------------------- ------------------------- ------------------------------
SYSTEM                         SYS_EXPORT_FULL_02             EXPORT                              FULL                      NOT RUNNING

Restart a stopped Data Pump job :-

Attach to the stopped Data Pump job :-

C:\Users\NewAdmin>expdp system/orcl attach=SYS_EXPORT_FULL_02

Export: Release 11.2.0.1.0 - Production on Sat Jun 29 20:08:17 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: SYS_EXPORT_FULL_02
  Owner: SYSTEM
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 42902560C22D46BDA6924724126F59FD
  Start Time: Saturday, 29 June, 2013 20:08:20
  Mode: FULL
  Instance: orcl
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        system/******** directory=datapump dumpfile=expdp_full_29062013_1.dmp logfile=expdp_full_29062013_1.log compression=all full=y
     COMPRESSION           ALL
  State: IDLING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: C:\datapump\expdp_full_29062013_1.dmp
    bytes written: 4,096

Worker 1 Status:
  Process Name: DW00
  State: UNDEFINED

Export> START_JOB

Export> CONTINUE_CLIENT
Job SYS_EXPORT_FULL_02 has been reopened at Saturday, 29 June, 2013 20:08
Restarting "SYSTEM"."SYS_EXPORT_FULL_02":  system/******** directory=datapump dumpfile=expdp_full_29062013_1.dmp logfile=expdp_full_29062013_1.log compression=all full=y
Processing object type DATABASE_EXPORT/TABLESPACE

The job is now running again.

Kill a Data Pump job :-

Attach to a running Data Pump job :-

C:\Users\NewAdmin>expdp system/orcl attach=SYS_EXPORT_FULL_01

Export: Release 11.2.0.1.0 - Production on Sat Jun 29 20:12:55 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: SYS_EXPORT_FULL_01
  Owner: SYSTEM
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 452263D9B00047A9BCD6E97DE83878F4
  Start Time: Saturday, 29 June, 2013 20:12:57
  Mode: FULL
  Instance: orcl
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        system/******** directory=datapump dumpfile=expdp_full_29062013.dmp logfile=expdp_full_29062013.log compression=all full=y
     COMPRESSION           ALL
  State: IDLING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: C:\datapump\expdp_full_29062013.dmp
    bytes written: 5,570,560

Worker 1 Status:
  Process Name: DW00
  State: RUNNING

Export> KILL_JOB
Are you sure you wish to stop this job ([yes]/no): y

The difference between KILLING and STOPPING a Data Pump job is that once you kill a job you cannot restart it, whereas a stopped job can be restarted with START_JOB. Killing a job also drops its master table and deletes the dump files.
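A related housekeeping step: after an abnormal termination, a job can linger in DBA_DATAPUMP_JOBS in the NOT RUNNING state. A sketch of finding and cleaning up such an entry, using the job name from the example above (the master table is named after the job):

```sql
-- Find jobs that are defined but not executing:
SELECT owner_name, job_name, state
  FROM dba_datapump_jobs
 WHERE state = 'NOT RUNNING';

-- Dropping the master table removes the orphaned job entry:
DROP TABLE system.sys_export_full_02;
```

Be careful with this: dropping the master table of a deliberately stopped job also makes that job impossible to restart, so only do it for jobs you no longer need.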

I hope this article helped you.