
Oracle 10g for Dummies PDF


Oracle Database Concepts, 10g Release 2. Copyright © Oracle. All rights reserved. Primary Author: Michele Cyran. Currently, Pete is performing the role of Oracle9i and Oracle 10g database administrator. This book will help beginners to become experts with an easy-to-follow format and numerous examples. You will learn how to do all these queries, and even build the tables to store the data.

Author: SHAWNEE MOSBURG
Language: English, Spanish, Dutch
Country: Bulgaria
Genre: Lifestyle
Pages: 426
Published (Last): 12.06.2016
ISBN: 819-3-53810-752-6
PDF File Size: 9.58 MB
Distribution: Free* [*Registration Required]
Uploaded by: ONIE

35,582 downloads, 174,021 views, ePub size 36.84 MB


Employ SQL functions to generate and retrieve customized data. Run data manipulation language (DML) statements to update data in Oracle Database 10g. A wise man* once said, an expert is someone who uses big words and acronyms where simple phrases would do just as nicely. So stand back. Oracle Database 10g: Administration Workshop I Release 2 (Code: DGC30), 3 hours a day. In class, students learn the concepts of relational databases.


Oracle 10g Tutorial


When, eventually, you rise to leave, you exchange names and numbers and promise to stay in touch. They say their name is Ross Geller. You add it to your address book. But you already had a friend named Ross Geller! How will you know which is which when you want to phone them up and laugh about the monkey joke again? Databases rule the world, and thus, primary keys are all around us.

We now have all the pieces of the puzzle. We can now redefine — and understand — relational databases. A Relational Database is a database in which the data is organised in tables with the relationships being maintained between the different tables.

Our database has a table for names, another for phone numbers, and a third for addresses. However, there is no way of knowing which of our friends lives at what address and when, or what their phone number might be. Take a minute to study the tables. Notice how useful primary keys are? And so, armed with our burgeoning knowledge of databases, we can look at the following.

And the reason we know that is because we now implicitly understand the concept of foreign keys. A Foreign Key is a column or combination of columns that uniquely identifies a row in another table. Foreign keys are the invisible threads that knit all the tables in our database together.

It is the foreign keys, telling us how the rows in one table are related to the rows in another table, that turn a database into a relational database. It is the foreign key that takes data and begins to turn it into information. What is a relational database? What are tables, columns and rows? What are the main data types? What are primary keys and foreign keys?
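To make those ideas concrete, here is a minimal sketch in SQL; the table and column names are illustrative, not from the book:

    CREATE TABLE friends (
      friend_id   NUMBER        PRIMARY KEY,   -- primary key: uniquely identifies each friend
      first_name  VARCHAR2(50),
      last_name   VARCHAR2(50)
    );

    CREATE TABLE phone_numbers (
      phone_id    NUMBER        PRIMARY KEY,
      friend_id   NUMBER        REFERENCES friends (friend_id),  -- foreign key: points at one row in friends
      phone       VARCHAR2(20)
    );

    -- Two different Ross Gellers are no longer a problem: each gets his own
    -- friend_id, and every phone number says exactly which Ross it belongs to.

The foreign key in phone_numbers is the invisible thread that ties each number back to exactly one friend.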

His tools of choice are Oracle technologies and he has over a decade of experience building applications with Oracle Forms, Oracle Application Development Framework and Oracle Application Express. David holds a degree in Accountancy and earned his bread as a short story writer and a magazine editor and columnist before turning to IT. David can be contacted at about.

As the overall shared pool changes in size, so does the dictionary cache.

The server result cache
The server result cache has two parts: the SQL result cache and the PL/SQL function result cache. The SQL result cache lets Oracle skip the execution part of the, er, execution, for lack of a better term, and go directly to the result set if it exists.

What if your data changes? The SQL result cache works best on relatively static data like the description of an item on an e-commerce site. Should you worry about the result cache returning incorrect data?


Not at all. Oracle automatically invalidates data stored in the result cache if any of the underlying components are modified. For example, say you have a function that calculates the value of the dollar based on the exchange rate of the Euro.

You might not want to store that actual value since it changes constantly. Instead, you have a function that calls on a daily or hourly rate to determine the value of the dollar.
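As a rough sketch of what such a function might look like (the function name, the exchange_rates table, and the items table below are made up for illustration; RESULT_CACHE is the keyword that puts the result into the cache):

    -- Hypothetical function result cache example; names are illustrative.
    CREATE OR REPLACE FUNCTION usd_value (p_amount_eur NUMBER)
      RETURN NUMBER
      RESULT_CACHE                 -- cache the result for each distinct input
    IS
      v_rate NUMBER;
    BEGIN
      -- Look up today's EUR-to-USD rate (exchange_rates is a made-up table).
      SELECT rate
        INTO v_rate
        FROM exchange_rates
       WHERE currency = 'EUR'
         AND rate_date = TRUNC(SYSDATE);

      RETURN p_amount_eur * v_rate;
    END usd_value;
    /

    -- The SQL result cache works the same way for whole queries:
    SELECT /*+ RESULT_CACHE */ item_id, description
      FROM items;                  -- items is also illustrative

If a row in exchange_rates or items changes, Oracle invalidates the cached result automatically, just as described above.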

In a financial application, this call could happen thousands of times an hour. If the rate does change, Oracle re-executes the function and updates the result cache.

The reserved pool
When Oracle needs to allocate a large chunk (over 5 KB) of contiguous memory in the shared pool, it allocates the memory in the reserved pool. Dedicating the reserved pool to handle large memory allocations improves performance and reduces memory fragmentation.

Least Recently Used algorithm
If the library cache is short on space, objects are thrown out.

Statements that are used the most stay in the library cache the longest. If your desk is cluttered, what do you put away first? The stuff you use the least. Basically, the heap area is a bunch of smaller memory components of the shared pool; Oracle determines their sizes and tunes them accordingly.

Database buffer cache
The database buffer cache is typically the largest portion of the SGA. It has data that comes from the files on disk.

The database buffer cache can contain data from all types of objects. A database block is the minimum amount of storage that Oracle reads or writes. All storage segments that contain data are made up of blocks. When you request data from disk, at minimum Oracle reads one block. Even if you request only one row, many rows in the same table are likely to be retrieved. The same goes if you request one column in one row. Oracle reads the entire block, which most likely has many rows, and all columns for that row.
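The block size is set per database; a quick way to check yours (a sketch):

    -- Show the database block size (commonly 8 KB)
    SHOW PARAMETER db_block_size

    -- Or, from the dynamic performance views:
    SELECT value
      FROM v$parameter
     WHERE name = 'db_block_size';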

Buffer cache state
The buffer cache controls what blocks get to stay depending on available space and the block state, similar to how the shared pool decides what SQL gets to stay. The buffer cache uses its own version of the LRU algorithm. A block in the buffer cache can be in one of three states: free, pinned, or dirty. The LRU algorithm works a little differently in the buffer cache than it does in the shared pool. It scores each block and then times how long it has been since it was accessed.

The higher the points, the less likely the block will be flushed from memory. However, it must be accessed frequently or the score decreases. A block has to work hard to stay in memory if the competition for memory resources is high. Giving each block a score and time prevents this type of situation from arising: A block is accessed heavily at the end of the month for reports. Its score is higher than any other block in the system. That block is never accessed again. It sits there wasting memory until the database is restarted or another block finally scores enough points to beat it out.

The time component ages it out very quickly after you no longer access it.

Pinned blocks
A block currently being accessed is a pinned block. The block is locked, or pinned, into the buffer cache so it cannot be aged out of the buffer cache while the Oracle process (often representing a user) is accessing it.

Dirty blocks
A modified block is a dirty block. To make sure your changes are kept across database shutdowns, these dirty blocks must be written from the buffer cache to disk. The database keeps dirty blocks in a dirty list, or write queue. You might think that every time a block is modified, it should be written to disk to minimize lost data. Several structures help prevent lost data. Furthermore, Oracle has a gambling problem.

System performance would crawl if you wrote blocks to disk for every modification. To combat this, Oracle plays the odds that the database is unlikely to fail and writes blocks to disk only in larger groups.

Oracle is getting performance out of the database right now at the possible expense of a recovery taking longer later.

Block write triggers
What triggers a block write and therefore a dirty block? You find out more about DDL in Chapter 6. The fact is, the database stays pretty busy writing blocks in an environment where there are a lot of changes.

Redo log buffer
The redo log buffer is another memory component that protects you from yourself, bad luck, and Mother Nature.

This buffer records every SQL statement that changes data. The statement itself and any information required to reconstruct it is called a redo entry. Redo entries hang out here temporarily before being recorded on disk. This buffer protects against the loss of dirty blocks. Imagine that you have a buffer cache full of blocks, and a good number of them are dirty.

Then imagine a power supply goes belly up in your server, and the whole system comes crashing down without any dirty buffers being written. That data is all lost, right? Not so fast. The redo log buffer is flushed when certain things occur (a commit, for example). It seems redundant. The file that records this information is sequential.

It just records the redo entry. A block exists somewhere in a file. Oracle has to find out where, go to that spot, and record it.

The redo entry takes a split second to write, which reduces the window of opportunity for failure. It also returns your commit only if the write is successful. You know right away that your changes are safe.
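Two quick checks (a sketch using standard views and parameters) show how big the redo log buffer is and how much redo the instance has generated:

    -- Size of the redo log buffer, in bytes
    SHOW PARAMETER log_buffer

    -- How much redo has been generated and written since startup
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('redo size', 'redo writes');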


Not everyone uses the optional large pool component. The large pool relieves the shared pool of sometimes-transient memory requirements; features such as shared server connections, parallel query, and RMAN backups use the large pool. The large pool has no LRU. Once it fills up (if you size it too small), the processes revert to their old behavior of stealing memory from the shared pool.

The Java pool is an optional memory component.

In our experience, this configuration is relatively rare. In fact, we see this where Oracle-specific tools are installed. The fact is, even though Oracle has its own Java container, many other worthwhile competing alternatives are out there.

Oracle Streams is an optional data replication technology where you replicate (reproduce) the same transactions, data changes, or events from one database to another (sometimes remote) database. You would do this if you wanted the same data to exist in two different databases. The streams pool stores buffered queue messages and provides the memory used by capture and apply processes. By default, the value of this pool is zero and increases dynamically if Oracle Streams is in use.
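To see how these SGA components (shared pool, buffer cache, large pool, Java pool, streams pool, and so on) are currently sized, a sketch along these lines works:

    -- Current sizes of the SGA components
    SELECT name, bytes / 1024 / 1024 AS size_mb
      FROM v$sgainfo
     ORDER BY bytes DESC;

    -- Or the individual pool parameters
    SHOW PARAMETER large_pool_size
    SHOW PARAMETER java_pool_size
    SHOW PARAMETER streams_pool_size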

Again, the PGA used to be allocated out of the shared pool. In Oracle 9i, a memory structure called the instance PGA held all private information as needed. This alleviated the need for the shared pool to constantly resize its SQL area to meet the needs of individual sessions. Because the number of connected sessions varies, as do their private memory needs, the instance PGA was designed for this type of memory usage. The PGA contains private per-session work areas, such as sort and session memory. Through the last several releases of Oracle, the database has become more automated in areas that were previously manual and even tedious at times.

Exactly the opposite: When more mundane operations are automated, it frees you up as the DBA to focus on the more advanced features. It frees up our resources to focus on things such as high availability and security, areas that require near full-time attention.

We recommend that you manage memory automatically in Oracle 12c. For that reason, we cover only automatic management in this chapter.


Managing memory automatically
When you create your database, you can set one new parameter that takes nearly all memory tuning out of your hands. By setting this parameter, all the memory areas discussed earlier in this chapter are automatically sized and managed. Answer these questions to help set the value: How much memory is available? What other applications are running on the machine?
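The parameter the authors have in mind here is presumably MEMORY_TARGET, Oracle's switch for automatic memory management; a minimal sketch of setting it, assuming an SPFILE is in use (the 2G figure is just an illustration):

    -- Let Oracle manage the SGA and PGA together
    ALTER SYSTEM SET memory_max_target = 2G SCOPE = SPFILE;
    ALTER SYSTEM SET memory_target     = 2G SCOPE = SPFILE;
    -- memory_max_target only takes effect after a restart

    -- Afterwards, Oracle estimates how more (or less) memory would help:
    SELECT * FROM v$memory_target_advice ORDER BY memory_size_factor;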

Use this formula. To help determine whether you have enough memory, Oracle gives you some pointers if you know where to look. Every single session that connects to the database requires memory associated with its OS or server process.

This memory requirement adds up. There are no processes when the Oracle instance is shut down. It can also depend on your OS. Three types of processes are part of the instance: background processes, server processes, and user processes. Server processes negotiate the actions of the users.

Background processes
In Oracle 12c, you can have a large number of background processes. Many are multiples of the same process for parallelism and taking advantage of systems with multiple CPUs. The list below shows the most common background processes. By default, no processes have more than one instance of their type started. More advanced tuning features involve parallelism.

PMON: The process monitor cleans up failed processes by releasing resources and rolling back uncommitted data.

SMON: The system monitor is primarily responsible for instance recovery. If the database crashes and redo information must be read and applied, the SMON takes care of it. It also cleans and releases temporary space.

DBWn: The database writer writes dirty blocks to disk. There can be up to 20 of them, hence the n.

LGWR: The log writer writes the redo entries to disk and signals a completion.

CKPT: The checkpoint process is responsible for initiating checkpoints. A checkpoint is when the system periodically dumps all the dirty buffers to disk. Most commonly, this occurs when the database receives a shutdown command. It also updates the data file headers and the control files with the checkpoint information so the SMON knows where to start recovery in the event of a system crash.


ARCn: Up to 30 archiver processes (0–9, a–t) are responsible for copying filled redo logs to the archived redo storage area.

CJQ0: The job queue coordinator checks for scheduled tasks within the database. These jobs can be set up by the user or can be internal jobs for maintenance. When it finds a job that must be run, it spawns the following goodie.

Jnnn: A job queue process slave actually runs the job. There can be up to 1,000 of them.

DIA0: The diagnosability process resolves deadlock situations and investigates hanging issues.

VKTM: The virtual keeper of time sounds like a fantasy game character but simply provides a time reference within the database.

LREG: The listener registration process registers database instance and dispatcher information with the Oracle listener process. This allows incoming user connections to get from the listener to the database. Some other background processes are related to performance tuning and troubleshooting.


However, those described above are the most common, and you will find them on almost all Oracle installations.
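You can also ask the instance which background processes it knows about; a sketch using the standard dynamic performance view:

    -- All background processes Oracle knows about; those with a nonzero
    -- paddr are actually running in this instance
    SELECT name, description, paddr
      FROM v$bgprocess
     ORDER BY name;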

User and server processes
Because user and server processes are intertwined, we discuss the two together. However, they are distinct and separate processes. As a matter of fact, they typically run on separate machines.

A very simple example: the user process is the program you run on your own machine (a SQL client, for instance), and it talks to a server process. The server process serves and exists on the database server. It does anything the user requests of it.

It is responsible for reading blocks into the buffer cache. It changes the blocks if requested. It can create objects. Server processes can be one of two types: dedicated or shared. However, you can change it one way or the other later on.

Dedicated server architecture
Each user process gets its own server process.

This is the most common Oracle configuration. It allows a server process to wait on you. If the resources can support dedicated connections, this method also is the most responsive. However, it can also use the most memory. Imagine, though, thousands of users on the system sitting idle most of the time.

Shared server architecture
Just as the name implies, the server processes are shared.

Now, instead of a server process waiting on you hand and foot, you have only one when you need it. Think of a server process as a timeshare for Oracle. On a system with thousands of mostly idle users, you might be able to support them with only 50 server processes.

You must do a few things for this to work properly. This works best in a fast, transaction-based environment like an e-commerce site.

All the interprocess communication seems to have a small CPU cost associated with it over dedicated server processes. Most applications these days get around the problems associated with too many dedicated servers by using advanced connection pooling at the application server level. You should know about some other limitations: DBA connections must have a dedicated server. Therefore, a shared server environment is actually a hybrid.
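If you want to see which flavor your sessions are actually using (both can coexist, as noted), a quick sketch against V$SESSION:

    -- How many sessions are on dedicated vs. shared servers
    SELECT server, COUNT(*) AS sessions
      FROM v$session
     GROUP BY server;

    -- A client can request one or the other in its connect descriptor,
    -- for example (SERVER = DEDICATED) or (SERVER = SHARED) in tnsnames.ora.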

Shared servers can coexist with a dedicated server.

Getting Physical with Files
Many different types of files, some required and some optional, are used to run an Oracle database. Many types of files are created with your database. Some of these files are for storing raw data. Some are used for recovery. Some are used for housekeeping or maintenance of the database itself.

Data files: Where the data meets the disk
Data files are the largest file types in an Oracle database. They store all the actual data you put into your database as well as the data Oracle requires to manage the database.

Data files are a physical structure: They exist whether the database is open or closed. Data files are also binary in nature. The data is stored in an organized format broken up into Oracle blocks.


Whenever a server process reads from a data file, it does so by reading at the very least one complete block. It puts that block into the buffer cache so that data can be accessed, modified, and so on. OS blocks are different from Oracle blocks.

OS blocks are physical, and their size is determined when you initially format the hard drive. You should know the size of your OS block. Most of the time, Oracle data files have an extension of .DBF (short for database file).

You could name it .XYZ, and it would function just fine. We feel it is best practice to stick with .DBF because that extension is used in 95 percent of databases. In every data file, the very first block stores the block header. To be specific, depending on your Oracle block size, the data file header block may be several blocks. By default, the header block is 64K. Therefore, if your Oracle block size is 4K, then 16 header blocks are at the beginning of the file.

The space is then freed to the file either immediately after your operation is done or as soon as you log out of the system.
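The figure at this point in the book simply lists the data files; you can produce the same listing yourself with something along these lines:

    -- List the data files, their tablespaces, and sizes
    SELECT file_name,
           tablespace_name,
           bytes / 1024 / 1024 AS size_mb
      FROM dba_data_files
     ORDER BY tablespace_name;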

Control files
The control file is a very important file in the database — so important that you have several copies of it. Typically, control files are named with the extension .CTL. Any extension will work, but if you want to follow best practice, that one is the most popular.

Control files contain information about the physical structure of the database, such as the database name and the names and locations of the data files and redo log files. Typically, control files are some of the smaller files in the database. If you were to lose all of your control files in an unfortunate failure, it is a real pain to fix.
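To see where your own control file copies live, a quick sketch:

    -- Locations of the control file copies
    SELECT name FROM v$controlfile;

    -- The same information, as an initialization parameter
    SHOW PARAMETER control_files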

Redo log files
Redo log files store the information from the log buffer. Typically, redo log files are named with the extension .LOG. It can be anything you want, but best practice indicates that extension. Also, redo log files are organized into groups and members. Every database must have at least two redo log groups. Redo log files contain all the information necessary to recover lost data in your database. Every SQL statement that you issue changing data can be reconstructed from the information saved in these files.

The optimal size for your redo log files depends on how many changes you make to your database. The size is chosen by you when you set up the database and can be adjusted later. When the LGWR is writing to a redo log file, it does so sequentially. It starts at the beginning of the file and, once it is filled up, moves on to the next one. This is where the concept of groups comes in. Oracle fills each group and moves to the next. Once it has filled all the groups, it goes back to the first.

You could say they are written to in a circular fashion.


If you have three groups, it would go something like 1, 2, 3, 1, 2, 3, and so on. Several things happen during a log switch operation: for example, the LGWR finishes writing to the current group. By looking at all the things that occur when a log switch happens, you might agree that it is a fairly involved operation.
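You can watch the groups cycle through their states by querying the standard redo log views; a sketch:

    -- One row per redo log group: which one is CURRENT, how big, how many members
    SELECT group#, members, bytes / 1024 / 1024 AS size_mb, status, sequence#
      FROM v$log
     ORDER BY group#;

    -- The individual member files in each group
    SELECT group#, member
      FROM v$logfile
     ORDER BY group#;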

If you find log switches happening too often, consider increasing the size of each group. Similar to control files, redo log files should be configured with mirrored copies of one another. And, as with control files, each member should be on a separate disk device. That way, if a disk fails and the database goes down, you still have recovery information available.

You should not lose any data. Each copy within a group is called a member. A common configuration might be three groups with two members apiece, for a total of six redo log files. The group members are written to simultaneously by the log writer. How many groups are appropriate? You want enough that the first group in the list can be copied off and saved before the LGWR comes back around to use it.

If the LGWR has to wait for a group to become available, this can severely impact your system.


Thankfully, we rarely see this happen. How many members are appropriate? Two members on two disks seems to be pretty common. Would more be better? Well, not really. It can impact system performance while at the same time offering very little return. We commonly get this question: if my disks are already mirrored, do I really need more than one member? After all, if a disk fails, I have another one right there to pick up the slack. Oracle still recommends two members for each group as a best practice.

What if that controller writes corrupt gibberish? Now both your copies are corrupted. Separating your members across two different disks with different controllers is the safest bet.

Moving to the archives
Archive log files are simply copies of redo log files.

Most archive log files have the extension .ARC or .ARCH. We try to use .ARC, as that seems most common. Not all databases have archive log files. It depends on whether you turn on archiving. By turning on archiving, you can recover from nearly any type of failure, provided two things: you have a full backup, and you have all the archive logs generated since that backup.

The ARCn process has to copy each redo log group as it fills up. It takes extra processing to copy the redo logs via the ARCn process. You have to keep all the archive logs created between each backup. Relatively speaking, each of these costs is small in terms of the return you get: We typically recommend that, across the board, all production databases archive their redo logs. You can easily just copy your production database to revive a broken test. Sometimes the test database is important enough to archive.

You should keep archive log files for recovery between each backup. Say, for example, that you take a full backup every Sunday, and your database loses files due to a disk failure on Wednesday. The recovery process would be restoring the lost files from the last backup and then telling Oracle to apply the archive log files from Sunday all the way up to the failure on Wednesday.
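Checking and switching the archiving mode looks roughly like this (a sketch; the database has to be mounted, not open, when you flip the mode):

    -- Is archiving on?
    SELECT log_mode FROM v$database;

    -- Turn it on (requires a clean restart to the MOUNT state)
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;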

They should go to two different destinations on different devices, just like the others.

Server and initialization parameter files
Server and initialization parameter files are the smallest files on your system. Typically, these files end with an .ORA extension. Personally, we have never seen anything but that. This is where you configure settings such as memory sizes, process limits, and the control file locations. So many parameters to configure and tweak? The fact is, 99 percent of your database configuration is done with about 30 of the main parameters. The rest of the parameters are for uncommon configurations that require more expert adjustment.

The rest of the parameters are for uncom- mon configurations that require more expert adjustment. As a matter of fact, of those 1,, over 1, are hidden. Sorry if we scared you a little there. We just want you to have the whole picture. Whenever you start your database, the very first file read is the parameter file. It sets up all your memory and process settings and tells the instance where the control files are located. It also has information about your archiving status. For example, if your database name is dev12c, the files would be named as follows: Getting Started with Oracle 12c By naming them this way and putting them in the appropriate directory, Oracle automatically finds them when you start the database.

Applying Some Logical Structures
After you know the physical structures, you can break them into more logical structures.

All the logical structures that we talk about are in the data files. Logical structures allow you to organize your data into manageable and, well, logical pieces. (Figure: the relationship between logical and physical structures in the database; the arrow points in the direction of a one-to-many relationship.)

Tablespaces
Tablespaces are the first level of logical organization of your physical storage.

Every 12c database should have several standard tablespaces, among them an undo tablespace (which stores the rollback or undo segments used for transaction recovery) and a temporary tablespace (for temporary storage). Typically, each tablespace might start attached to one data file, but as the database grows and your files become large, you may decide to add storage in the form of multiple data files. You create some areas to store your data. Say your database is going to have sales, human resources, accounting data, and historical data.

You might have a tablespace for each of those areas: sales, human resources, accounting, and history. Separating them this way also helps you harden your database against complete failure. We discuss actual tablespace creation in Chapter 7. Keep in mind that when deciding on the logical organization, it pays to sit down and map out all the different activities your database will support. If possible, create tablespaces for every major application and its associated indexes.

If your database has especially large subsets of data, sometimes it pays to separate that data from your regular data as well. Say, for example, that one of those subsets is a collection of pictures. Those pictures probably never change. If you have a tablespace dedicated to them, you can make it read only. The tablespace is taken out of the checkpointing process. You can also back it up once, and then do it again only after it changes. That reduces the storage required for backups, plus it speeds up your backup process.
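As a rough sketch of what that might look like in SQL (the tablespace names, file paths, and sizes here are made up for illustration):

    -- A tablespace for the sales application
    CREATE TABLESPACE sales_data
      DATAFILE '/u01/oradata/dev12c/sales_data01.dbf' SIZE 500M
      AUTOEXTEND ON NEXT 100M MAXSIZE 2G;

    -- A tablespace for rarely changing pictures, later made read only
    CREATE TABLESPACE pictures
      DATAFILE '/u02/oradata/dev12c/pictures01.dbf' SIZE 1G;

    ALTER TABLESPACE pictures READ ONLY;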

Segments
Segments are the next logical storage structure after tablespaces. Segments are objects in the database that require physical storage, such as tables and indexes. Whenever you create a segment, specify what tablespace you want it to be part of.

This helps with performance.

Extents
Extents are like the growth rings of a tree. Whenever a segment grows, it gains a new extent. When you first create a table to store items, it gets its first extent.

As you insert data into that table, that extent fills up. When the extent fills up, it grabs another extent from the tablespace. When you start creating objects, that free space gets assigned to segments in the form of extents. Your average tablespace is made up of used extents and free space. When all the free space is filled, that data file is out of space. For example, when you create an items table and insert the first batch of items, it may grow and extend several times.

Now your segment might be made up of five extents. However, say you also create a new table. When a table is created in a new tablespace, it starts at the beginning of the data file. After you create your second table, your first table may need to extend again. Its next extent comes after the second table's extent. In the end, all objects that share a tablespace will have their extents intermingled. In years past, before Oracle had better algorithms for storage, DBAs spent a lot of their time and effort trying to coalesce these extents.

It was called fragmentation. Just let it be.
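If you are curious what that intermingling looks like, you can ask the data dictionary; a sketch, using the items table from the example above (assumed here to be named ITEMS):

    -- Where each extent of the ITEMS table lives inside its data file
    SELECT segment_name,
           extent_id,
           file_id,
           block_id,
           blocks
      FROM dba_extents
     WHERE segment_name = 'ITEMS'
     ORDER BY extent_id;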


