IBPS Specialist IT Officer Study Material/Plan: Technical & Aptitude/Reasoning. The IBPS Specialist IT Officer exam contains questions from both the analytical and the technical side. For an IT officer, knowledge of databases is mandatory. Since the IBPS Specialist Officer examination is coming up, we have collected study materials for the IBPS SO IT Officer Professional Knowledge section. The materials consist of four PDFs: three concept-notes PDFs and one MCQ PDF.
Papertyari presents online study material for the IBPS IT Officer examination. Let's start our discussion with some basic concepts of computer science.
Read about the security features of networking, as it is the main part of communication. Security is an important topic, so study all security measures. Questions on programming basics are often asked in the exam paper; practice programs and code to build efficiency.
DBMS is an integral part of every organization today.
Safe State: A state is safe if the system can allocate resources to each process, in some order, and still avoid a deadlock. A system is in a safe state only if there exists a safe sequence of all processes. A deadlock state is an unsafe state.
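A safe sequence, when one exists, can be found with the safety algorithm at the heart of the Banker's algorithm. A minimal sketch, with hypothetical allocation and need matrices:

```python
# Safety algorithm: try to find an order in which every process can finish.
# `available`, `allocation`, and `need` are hypothetical example matrices.
def is_safe(available, allocation, need):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    work = list(available)          # resources currently free
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Process i can run to completion and release its allocation.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None             # no process can proceed: the state is unsafe
    return sequence

print(is_safe([3, 3], [[1, 0], [2, 1], [0, 2]], [[2, 2], [1, 1], [3, 1]]))  # -> [0, 1, 2]
```

If no process's remaining need can be met from the free resources, the loop makes no progress and the state is reported unsafe.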
Not all unsafe states cause deadlocks. It is important to note that an unsafe state does not imply the existence, or even the eventual existence, of a deadlock; it implies only that some unfortunate sequence of events might lead to one. Overlays: This technique allows keeping in memory only those instructions and data that are required at a given time.
The other instructions and data are loaded, when they are needed, into the memory space occupied by the previous ones. Swapping: When a process has finished executing for one time quantum, it is swapped out of memory to a backing store. The memory manager then picks up another process from the backing store and loads it into the memory occupied by the previous process.
Then, the scheduler picks up another process and allocates the CPU to it. Memory Management: Memory management is the functionality of an operating system that handles or manages primary memory. It keeps track of each memory location, whether it is allocated to some process or free. Single Partition Allocation: The memory is divided into two parts.
One part is used by the operating system and the other is for user programs. The operating system code and data are protected from being modified by user programs using a base register. Multiple Partition Allocation: Multiple partition allocation may be further classified as follows. Fixed Partition Scheme: Memory is divided into a number of fixed-size partitions.
Each partition holds one process. This scheme supports multiprogramming, as a number of processes may be brought into memory and the CPU switched from one process to another. When a process arrives for execution, it is put into the input queue of the smallest partition that is large enough to hold it.
Variable Partition Scheme: A block of available memory is designated as a hole. At any time, a set of holes of various sizes is scattered throughout the memory. When a process arrives and needs memory, this set is searched for a hole large enough to hold the process. If the hole is too large, it is split into two parts.
The unused part is added to the set of holes.
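When a request must be satisfied from the list of free holes, one common strategy is first fit: take the first hole large enough. A minimal sketch, with a hypothetical hole list of (start, size) pairs:

```python
# First-fit allocation over a list of holes kept in address order.
# Each hole is a (start_address, size) pair; the data here is hypothetical.
def first_fit(holes, request):
    """Allocate `request` units from the first hole large enough; return its start or None."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            if size == request:
                holes.pop(i)        # the hole is used exactly: remove it
            else:
                # Split the hole: the unused part stays in the list.
                holes[i] = (start + request, size - request)
            return start
    return None                     # no hole is large enough

holes = [(0, 100), (300, 50), (500, 200)]
print(first_fit(holes, 120))  # -> 500, the first hole with size >= 120
print(holes)                  # -> [(0, 100), (300, 50), (620, 80)]
```

Best fit (smallest adequate hole) and worst fit (largest hole) differ only in which hole the search selects.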
All holes which are adjacent to each other are merged. There are different ways of implementing allocation of partitions from the list of free holes, such as first fit (allocate the first hole that is big enough), best fit (allocate the smallest hole that is big enough), and worst fit (allocate the largest hole). Paging: Paging is a memory management technique which allows memory to be allocated to a process wherever it is available.
Physical memory is divided into fixed-size blocks called frames. Logical memory is broken into blocks of the same size called pages. The backing store is also divided into blocks of the same size.
When a process is to be executed, its pages are loaded into available frames. Since frames and pages are the same size, any page can be placed in any free frame. Every logical address generated by the CPU is divided into two parts: the page number p and the page offset d. The page number is used as an index into a page table.
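The translation can be sketched in a few lines, assuming a hypothetical 1 KB page size and a small example page table:

```python
# Logical-to-physical address translation under paging.
PAGE_SIZE = 1024                   # hypothetical page size (1 KB)
page_table = {0: 5, 1: 2, 2: 7}    # page number p -> frame number f (example data)

def translate(logical_address):
    p, d = divmod(logical_address, PAGE_SIZE)  # split into page number and offset
    f = page_table[p]                          # index the page table with p
    return f * PAGE_SIZE + d                   # frame base combined with the offset

print(translate(1034))  # page 1, offset 10 -> frame 2 -> 2*1024 + 10 = 2058
```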
Each entry in the page table contains the base address f of that page in physical memory. The base address from the pth entry is then combined with the offset d to give the actual physical address. Virtual Memory: Virtual memory separates the user's logical memory from physical memory and makes it possible to run a process larger than main memory; it is a memory management scheme that allows the execution of a partially loaded process. Files: Each file is referred to by its name. The file is named for the convenience of users, and once named, it becomes independent of the user and the process.
A file also has attributes, such as its type, size, location, and protection. Disk Scheduling: One of the responsibilities of the OS is to use the hardware efficiently. For disk drives, meeting this responsibility entails fast access time and large disk bandwidth. Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.
FCFS Scheduling: Requests are serviced in the order in which they arrive. It is simple and fair, but generally does not give the fastest service. SSTF Scheduling: It selects the request with the minimum seek time from the current head position. It is not an optimal algorithm, but it is an improvement over FCFS. SCAN Scheduling: In the SCAN algorithm, the disk arm starts at one end of the disk and moves toward the other end, servicing requests as it reaches each cylinder, until it gets to the other end of the disk.
At the other end, the direction of head movement is reversed and servicing continues; the head continuously scans back and forth across the disk. The SCAN algorithm is sometimes called the elevator algorithm, since the disk arm behaves just like an elevator in a building, first servicing all the requests going up and then reversing to service requests the other way.
C-SCAN Scheduling: When the head reaches the other end, it immediately returns to the beginning of the disk without servicing any requests on the return trip. The C-SCAN algorithm essentially treats the cylinders as a circular list that wraps around from the final cylinder to the first one.
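These policies can be compared on a hypothetical request queue with the head starting at cylinder 53; a minimal sketch:

```python
# Disk scheduling: total head movement and service order on example data.
def total_movement(head, order):
    """Sum of seek distances when servicing `order` from position `head`."""
    total = 0
    for r in order:
        total += abs(r - head)
        head = r
    return total

def sstf_order(head, requests):
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))  # minimum seek from current head
        order.append(nearest)
        head = nearest
        pending.remove(nearest)
    return order

def scan_order(head, requests):
    up = sorted(r for r in requests if r >= head)            # serviced while moving up
    down = sorted((r for r in requests if r < head), reverse=True)  # then on the way back
    return up + down

def cscan_order(head, requests):
    up = sorted(r for r in requests if r >= head)
    wrapped = sorted(r for r in requests if r < head)        # after wrapping to the start
    return up + wrapped

head, queue = 53, [98, 183, 37, 122, 14, 124, 65, 67]
print(total_movement(head, queue))                    # FCFS order: 640 cylinders
print(total_movement(head, sstf_order(head, queue)))  # SSTF order: 236 cylinders
print(scan_order(head, queue))   # -> [65, 67, 98, 122, 124, 183, 37, 14]
print(cscan_order(head, queue))  # -> [65, 67, 98, 122, 124, 183, 14, 37]
```

The SCAN and C-SCAN totals additionally include the travel to the end of the disk (and, for C-SCAN, the wrap-around), so only their service orders are shown here.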
Process Control Block (PCB): The PCB contains important information about a specific process, including: the current state of the process; a unique identifier, in order to track which process is which; a pointer to the parent process and, if one exists, a pointer to the child process; the priority of the process, as part of the CPU scheduling information; pointers to locate the memory of the process; a register save area; and the processor it is running on.

A process consists of the code for the program, the program's static data, its dynamic data, its procedure call stack, the contents of the general-purpose registers, and the operating system resources in use.

Process State Model: New state: the process is being created. Running state: a process is said to be running if it has the CPU, that is, it is actually using the CPU at that particular instant. Blocked or waiting state: the process is unable to run until some external event happens. Ready state: the process could use the CPU if one were available; it is runnable but temporarily stopped so that another process can run. Terminated state: the process has finished execution. Dispatcher: The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.
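The state model above can be summarized as a table of allowed transitions; the trigger descriptions paraphrase the definitions given in the notes:

```python
# Allowed process-state transitions and the events that trigger them.
TRANSITIONS = {
    ("new", "ready"): "process creation is complete; it is admitted",
    ("ready", "running"): "the dispatcher allocates the CPU to it",
    ("running", "ready"): "preempted, e.g. its time quantum expired",
    ("running", "waiting"): "blocked until some external event happens",
    ("waiting", "ready"): "the awaited external event has occurred",
    ("running", "terminated"): "the process has finished execution",
}

def can_move(src, dst):
    """True if the state model permits moving directly from src to dst."""
    return (src, dst) in TRANSITIONS

print(can_move("ready", "running"))    # -> True
print(can_move("waiting", "running"))  # -> False: a blocked process becomes ready first
```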
Functions of the dispatcher include switching context, switching to user mode, and jumping to the proper location in the user program to restart it. Threads: A thread can be in any of several states: running, blocked, ready, or terminated. Each thread has its own stack; a thread consists of a program counter (PC), a register set, and a stack space.
Unlike processes, threads are not independent of one another: a thread shares with the other threads of its task its code section, data section, and OS resources, such as open files and signals.
Multithreading: An application is typically implemented as a separate process with several threads of control. There are two types of threads.
User threads: They sit above the kernel and are managed without kernel support. User-level threads are implemented in user-level libraries rather than via system calls, so thread switching does not need to call into the operating system or cause an interrupt to the kernel.
In fact, the kernel knows nothing about user-level threads and manages the process as if it were single-threaded. Kernel threads: Kernel threads are supported and managed directly by the operating system. Instead of a thread table in each process, the kernel keeps a single thread table that tracks all threads in the system.
Advantages of threads: threads minimize context-switching time; they provide concurrency within a process; communication between them is efficient; it is more economical to create and context-switch threads than processes; and they allow multiprocessor architectures to be utilized at greater scale and efficiency.
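As a small illustration of threads sharing their process's data section, a Python sketch (the thread count and increment count are arbitrary):

```python
# Threads of one process share its data: four threads update the same
# counter, guarded by a lock for safe concurrent access.
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:              # synchronize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # wait for all threads to finish
print(counter)                  # -> 40000
```

Without the lock, interleaved read-modify-write sequences could lose updates, which is exactly the cost of the sharing that makes thread communication cheap.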
Difference between a process and a thread: processes have separate address spaces and are scheduled independently, while the threads of one process share its address space and resources. Inter-Process Communication: Processes executing concurrently in the operating system may be either independent or cooperating processes.
Any process that shares data with other processes is a cooperating process.
There are two fundamental models of IPC. Shared memory: a region of memory shared by the cooperating processes is established, and the processes exchange information by reading and writing data in the shared region. Message passing: communication takes place by means of messages exchanged between the cooperating processes.
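A minimal message-passing sketch using a queue between two processes; the message text is purely illustrative:

```python
# Message passing between cooperating processes via a queue.
from multiprocessing import Process, Queue

def producer(q):
    q.put("hello from the child process")   # send a message to the parent

if __name__ == "__main__":
    q = Queue()
    child = Process(target=producer, args=(q,))
    child.start()
    message = q.get()                       # receive: blocks until a message arrives
    child.join()
    print(message)
```

The two processes have separate address spaces; the only data that crosses between them is the message placed on the queue.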
CPU Scheduling: Common CPU scheduling algorithms include the following. First Come First Serve (FCFS) Scheduling: Processes are executed in the order in which they arrive. It is easy to understand and implement, but poor in performance, as the average wait time is high.
Shortest Job First (SJF) Scheduling: The process with the smallest execution time is run next. It is impossible to implement exactly, as the processor would need to know in advance how much time each process will take. Priority-Based Scheduling: Each process is assigned a priority. The process with the highest priority is executed first, and so on. Processes with the same priority are executed on a first come, first served basis.
Priority can be decided based on memory requirements, time requirements, or any other resource requirement. Round Robin Scheduling: Each process is given a fixed time to execute, called a quantum. Once a process has executed for the given time period, it is preempted and another process executes for its time period.
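A minimal round-robin simulation, with hypothetical burst times, showing how processes are preempted and requeued until they finish:

```python
# Round robin: each process runs for at most one quantum per turn.
from collections import deque

def round_robin(burst_times, quantum):
    """Return the order in which process indices complete."""
    queue = deque((i, t) for i, t in enumerate(burst_times))
    order = []
    while queue:
        i, remaining = queue.popleft()
        if remaining <= quantum:
            order.append(i)                         # finishes within this quantum
        else:
            queue.append((i, remaining - quantum))  # preempted and requeued
    return order

print(round_robin([5, 3, 8], quantum=4))  # -> [1, 0, 2]
```

With a quantum of 4, process 1 (burst 3) finishes in its first turn, process 0 needs a second short turn, and process 2 needs two full quanta, giving the completion order shown.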