You should always use udelay(), since the precision of ndelay() depends on how accurate your hardware timer is (not always the case on an embedded SoC). Use of mdelay() is also discouraged. Timer handlers (callbacks) are executed in an atomic context, meaning that sleeping is not allowed at all. By sleeping, I mean any function that may result in sending the caller to sleep, such as allocating memory, locking a mutex, an explicit call to the sleep function, and so on.
Kernel locking mechanism Locking is a mechanism that helps share resources between different threads or processes. A shared resource is data or a device that can be accessed by at least two users, simultaneously or not.
Locking mechanisms prevent abusive access, for example, a process writing data while another one is reading in the same place, or two processes accessing the same device (the same GPIO, for example). The kernel provides several locking mechanisms. We will only learn about the most important ones, mutexes and spinlocks, since they are widely used in device drivers. Mutex Mutual exclusion (mutex) is the de facto, most-used locking mechanism. A contended mutex puts its waiters to sleep; the principle of sleeping is the same as seen previously.
The kernel then schedules and executes other tasks. Sometimes, you may only need to check whether a mutex is locked or not; the mutex_is_locked() function does just that, checking whether the mutex's owner field is empty (NULL) or not. There are also rules around using mutexes; the following are some of them:
- Only one task can hold the mutex at a time; this is actually not a rule, but a fact.
- Multiple unlocks are not permitted.
- They must be initialized through the API.
- A task holding the mutex may not exit, since the mutex would remain locked, and possible contenders would wait (sleep) forever.
- Memory areas where held locks reside must not be freed.
- Held mutexes must not be reinitialized.
- Since they involve rescheduling, mutexes may not be used in atomic contexts, such as tasklets and timers.
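As a sketch of the rules above, here is how a mutex is typically declared and used in a driver. The names (my_mutex, my_dev_locked_op) are hypothetical, not taken from any real driver:

```c
#include <linux/mutex.h>
#include <linux/errno.h>

static DEFINE_MUTEX(my_mutex);  /* statically initialized through the API */

/* Sleeping variant: may NOT be called from atomic context
 * (timers, tasklets, IRQ handlers) */
static void my_dev_exclusive_op(void)
{
    mutex_lock(&my_mutex);      /* sleeps until the mutex is acquired */
    /* ... access the shared resource here ... */
    mutex_unlock(&my_mutex);    /* exactly one unlock per lock */
}

/* Interruptible variant: the sleep can be interrupted by a signal,
 * so user-triggered code paths remain killable */
static int my_dev_locked_op(void)
{
    if (mutex_lock_interruptible(&my_mutex))
        return -ERESTARTSYS;
    /* ... critical section ... */
    mutex_unlock(&my_mutex);
    return 0;
}
```

mutex_is_locked(&my_mutex) can additionally be used to check the lock state without trying to acquire it.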
If more than one waiter is sleeping on a mutex, they are woken in the same order in which they were put to sleep. Spinlock Spinlocks behave differently: any thread that needs to acquire a spinlock will active-loop (spin) until the lock is acquired, at which point it breaks out of the loop. This is the point where mutexes and spinlocks differ. Since a spinlock heavily consumes the CPU while looping, it should be used for very quick acquisitions, especially when the time the spinlock is held is less than the time it would take to reschedule.
The spinlock should be released as soon as the critical task is done. In order to avoid wasting CPU time scheduling a thread that would probably just spin, trying to acquire a lock held by another thread that has been moved off the run queue, the kernel disables preemption whenever code holding a spinlock is running.
With preemption disabled, we prevent the spinlock holder from being moved off the run queue, which could lead waiting processes to spin for a long time and consume CPU. As long as you hold a spinlock, other tasks may be spinning while waiting on it. By using spinlock, you assert and guarantee that it will not be held for a long time.
You could say it is better to spin in a loop, wasting some CPU time, than to pay the cost of putting your thread to sleep, context-switching to another thread or process, and being woken up afterward. Spinning on a processor means no other task can run on that processor; it then makes no sense to use a spinlock on a single-core machine. In the best case, you will slow down the system; in the worst case, you will deadlock, as with mutexes. Lock acquisition and release typically happen around the critical section of an IRQ handler, but let's just focus on the lock aspect. Spinlock versus mutexes Used for concurrency in the kernel, spinlocks and mutexes both have their own objectives:
- Mutexes protect the process's critical resources, whereas spinlocks protect the IRQ handler's critical sections.
- Mutexes put contenders to sleep until the lock is acquired, whereas spinlocks infinitely spin in a loop, consuming CPU, until the lock is acquired.
- Because of the previous point, you can't hold a spinlock for a long time, since waiters will waste CPU time waiting for the lock, whereas a mutex can be held as long as the resource needs to be protected, since contenders are put to sleep in a wait queue.
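The pattern can be sketched as follows, with a spinlock shared between a hypothetical IRQ handler and process context (my_lock, shared_counter, and the handler names are illustrative, not real kernel symbols):

```c
#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);    /* statically initialized spinlock */
static int shared_counter;          /* resource shared with process context */

/* Hypothetical IRQ handler: the plain spin_lock() form is enough here,
 * since the kernel already runs us with the serviced line disabled */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    spin_lock(&my_lock);
    shared_counter++;               /* very short critical section */
    spin_unlock(&my_lock);
    return IRQ_HANDLED;
}

/* Process-context side: also disable local IRQs, so this CPU cannot be
 * interrupted (and deadlocked) by my_irq_handler while holding the lock */
static int read_counter(void)
{
    unsigned long flags;
    int val;

    spin_lock_irqsave(&my_lock, flags);
    val = shared_counter;
    spin_unlock_irqrestore(&my_lock, flags);
    return val;
}
```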
When dealing with spinlocks, please keep in mind that preemption is disabled only for threads holding spinlocks, not for spinning waiters. Work deferring mechanism Deferring is a method by which you schedule a piece of work to be executed in the future.
It's a way to postpone an action until later. Obviously, the kernel provides facilities to implement such a mechanism; it allows you to defer functions, whatever their type, to be called and executed later.
There are three of them in the kernel:. Softirqs and ksoftirqd A software IRQ softirq , or software interrupt is a deferring mechanism used only for very fast processing, since it runs with a disabled scheduler in an interrupt context. You'll rarely almost never want to deal with softirq directly. There are only networks and block device subsystems using softirq.
Tasklets are an instantiation of softirqs, and will be sufficient in almost every case where you feel the need to use softirqs. Softirqs that cannot be served immediately are queued by the kernel in order to be processed later. Ksoftirqds are responsible for this late execution (in a process context this time). A ksoftirqd is a per-CPU kernel thread raised to handle unserved software interrupts. A CPU-consuming ksoftirqd may indicate an overloaded system or a system under an interrupt storm, which is never good.
Tasklets Tasklets are a bottom-half mechanism (we will see what this means later) built on top of softirqs. Tasklets are not re-entrant by nature. Code is called reentrant if it can be interrupted anywhere in the middle of its execution, and then be safely called again. Tasklets are designed such that a given tasklet runs on one and only one CPU at a time (even on an SMP system), which is the CPU it was scheduled on, but different tasklets may run simultaneously on different CPUs.
The tasklet API is quite basic and intuitive. Globally, setting the count field to 0 means that the tasklet is disabled and cannot be executed, whereas a nonzero value means the opposite.
The kernel maintains normal-priority and high-priority tasklets in two different lists. High-priority tasklets are meant to be used for soft interrupt handlers with low latency requirements. There are some properties associated with tasklets you should know: high-priority tasklets are always executed before normal ones, and abusive use of high-priority tasklets will increase system latency, so only use them for really quick stuff.
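A minimal sketch of the tasklet API, against the classic (pre-5.8) interface; my_tasklet and my_tasklet_fn are hypothetical names:

```c
#include <linux/interrupt.h>
#include <linux/kernel.h>

static void my_tasklet_fn(unsigned long data)
{
    /* Runs later, in softirq (atomic) context: no sleeping here */
    pr_info("tasklet executed with data %lu\n", data);
}

/* Statically declare an enabled tasklet (its count field set to 0) */
static DECLARE_TASKLET(my_tasklet, my_tasklet_fn, 0);

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    tasklet_schedule(&my_tasklet);      /* normal priority */
    /* tasklet_hi_schedule(&my_tasklet);   high-priority variant */
    return IRQ_HANDLED;
}

static void my_driver_exit(void)
{
    tasklet_kill(&my_tasklet);  /* wait for any scheduled run to finish */
}
```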
Work queues Added during the 2.5 development series of the Linux kernel, the work queue is the last deferring mechanism we will talk about in this chapter. As a deferring mechanism, it takes an opposite approach to the others we've seen, running only in a preemptible context. It is the only choice when you need to sleep in your bottom half (I will explain what a bottom half is in the next section).
Keep in mind that work queues are built on top of kernel threads, and this is the reason why I decided not to talk about the kernel thread as a deferring mechanism at all.
However, there are two ways to deal with work queues in the kernel. First, there is a default shared work queue, handled by a set of kernel threads, each running on a CPU. Once you have work to schedule, you queue that work into the global work queue, which will be executed at the appropriate moment. The other method is to run the work queue in a dedicated kernel thread. It means whenever your work queue handler needs to be executed, your kernel thread is woken up to handle it, instead of one of the default predefined threads.
Structures and functions to call are different, depending on whether you chose a shared work queue or a dedicated one. Kernel-global work queue — the shared queue Unless you have no choice, need critical performance, or need to control everything from work queue initialization to work scheduling, you should use the shared work queue provided by the kernel, especially if you only submit tasks occasionally.
With that queue being shared over the system, you should be nice, and should not monopolize the queue for a long time. Since the execution of the pending task on the queue is serialized on each CPU, you should not sleep for a long time because no other task on the queue will run until you wake up.
You won't even know who you share the work queue with, so don't be surprised if your task takes longer to get the CPU. Since we are going to use the shared work queue, there is no need to create a work queue structure. There are three functions to schedule work on the shared work queue; schedule_work() is the main one, with variants for delayed work and for targeting a given CPU. You can flush the shared work queue with flush_scheduled_work(). Embedding a struct work_struct in your own data structure and retrieving it with container_of() in the handler is a common way to pass data to the work queue handler. When using your own kernel thread instead, there are four steps involved prior to scheduling your work: create your work queue, create your work function, create your work structure, and schedule the work.
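A hedged sketch of the shared (kernel-global) work queue pattern, with the container_of() trick for passing data; struct my_device and the handler names are hypothetical:

```c
#include <linux/workqueue.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>

/* Embed the work item in your own structure so the handler can
 * recover its context with container_of() */
struct my_device {
    struct work_struct ws;
    int pending_event;              /* hypothetical payload */
};

static void my_work_handler(struct work_struct *work)
{
    struct my_device *dev = container_of(work, struct my_device, ws);

    /* Process context: sleeping is allowed here */
    pr_info("deferred handling of event %d\n", dev->pending_event);
}

static void my_init(struct my_device *dev)
{
    INIT_WORK(&dev->ws, my_work_handler);
}

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    struct my_device *dev = dev_id;

    dev->pending_event = 42;
    schedule_work(&dev->ws);    /* queue onto the kernel-global queue */
    return IRQ_HANDLED;
}
```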
These functions return false if the work was already on a queue and true otherwise. New incoming (enqueued) work does not affect the sleep. You may typically use this in driver shutdown handlers. The work will be cancelled even if it requeues itself. You must also ensure that the work queue on which the work was last queued can't be destroyed before the handler returns.
Since Linux kernel v4, you must check whether the function returns true or not, and make sure the work does not requeue itself. If it returns false, you must then explicitly flush the work queue. A dedicated work queue may create one worker thread per processor; a different version of the same method will create only a single thread for all the processors (create_singlethread_workqueue()). Either way, this is just standard work for which the kernel provides a custom API that simply wraps around the standard one.
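The dedicated (here, single-threaded) work queue pattern can be sketched as follows; my_wq and the handler names are hypothetical:

```c
#include <linux/workqueue.h>
#include <linux/kernel.h>
#include <linux/errno.h>

static struct workqueue_struct *my_wq;
static struct work_struct my_work;

static void my_work_handler(struct work_struct *work)
{
    /* Runs in our dedicated kernel thread; sleeping is allowed */
    pr_info("running in the dedicated worker thread\n");
}

static int my_init(void)
{
    /* One worker thread shared by all CPUs; create_workqueue() would
     * instead create one thread per CPU */
    my_wq = create_singlethread_workqueue("my_wq");
    if (!my_wq)
        return -ENOMEM;

    INIT_WORK(&my_work, my_work_handler);
    queue_work(my_wq, &my_work);    /* queue_work(), not schedule_work() */
    return 0;
}

static void my_cleanup(void)
{
    /* Cancel or flush pending work before destroying the queue */
    if (!cancel_work_sync(&my_work))
        flush_workqueue(my_wq);
    destroy_workqueue(my_wq);
}
```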
Comparing kernel predefined work queue functions with standard work queue functions, the predefined ones (schedule_work() and friends) implicitly target the kernel-global queue, whereas the standard ones (queue_work() and friends) take the work queue to use as an explicit parameter. Kernel threads Work queues run on top of kernel threads. You already use kernel threads when you use work queues.
This is why I have decided not to talk about the kernel thread API. Kernel interruption mechanism An interrupt is the way a device halts the kernel, telling it that something interesting or important has happened.
These are called IRQs on Linux systems. The main advantage interrupts offer is to avoid device polling. It is up to the device to announce any change in its state; it is not up to us to poll it. In order to get notified when an interrupt occurs, you need to register to that IRQ, providing a function called an interrupt handler that will be called every time that interrupt is raised.
Registering an interrupt handler You can register a callback to be run when the interrupt line you are interested in gets fired. The flags and parameters given at registration are outlined in detail as follows: IRQF_SHARED allows the interrupt line to be shared between several devices; each device sharing the same line must have this flag set, and if it is omitted, only one handler can be registered for the specified IRQ line. IRQF_ONESHOT instructs the kernel not to re-enable the interrupt when the hardirq handler has finished; the line will remain disabled until the threaded handler has been run. In older kernel versions, IRQF_DISABLED asked the kernel to run the handler with interrupts disabled; this flag is no longer used. The last parameter, dev, should be unique to each registered handler, since it is used to identify the device. The common way of using it is to provide a device structure, since it is both unique and potentially useful to the handler.
Both parameters are given to your handler by the kernel. There are only two values the handler can return, depending on whether your device originated the IRQ or not: IRQ_HANDLED if your device caused the interrupt, and IRQ_NONE otherwise.
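Registration and the handler skeleton can be sketched as follows. The device structure, the probe/remove helpers, and device_raised_this_irq() are hypothetical, stand-ins for your hardware-specific logic:

```c
#include <linux/interrupt.h>

struct my_device { int something; };   /* hypothetical per-device data */

/* Hypothetical hardware check: read a status register and decide
 * whether our device raised the interrupt */
static bool device_raised_this_irq(struct my_device *mydev)
{
    return true;
}

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    struct my_device *mydev = dev_id;  /* cookie given to request_irq() */

    if (!device_raised_this_irq(mydev))
        return IRQ_NONE;   /* shared line: not ours, try the next handler */

    /* acknowledge the device, read status registers, etc. */
    return IRQ_HANDLED;    /* our device originated the IRQ */
}

static int my_probe(struct my_device *mydev, int irq)
{
    /* IRQF_SHARED lets several devices share the line; dev (the last
     * argument) must then be unique, hence the device structure */
    return request_irq(irq, my_irq_handler, IRQF_SHARED,
                       "my-device", mydev);
}

static void my_remove(struct my_device *mydev, int irq)
{
    free_irq(irq, mydev);  /* dev identifies which handler to remove */
}
```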
When writing the interrupt handler, you don't have to worry about re-entrance, since the IRQ line being serviced is disabled on all processors by the kernel in order to avoid recursive interrupts. Interrupt handler and lock It goes without saying that you are in an atomic context and must only use spinlocks for concurrency. An interrupt handler will always have priority over a user task, even if that task is holding a spinlock. Simply disabling IRQs on the local CPU is not sufficient: an interrupt may happen on another CPU.
It would be a disaster if a user task updating the data were interrupted by an interrupt handler trying to access the same data. Concept of bottom halves Bottom halves are mechanisms by which you split interrupt handlers into two parts.
This introduces another term, which is the top half. Before discussing each of them, let's talk about their origin, and what problem they solve. The problem — interrupt handler design limitations Whether an interrupt handler holds a spinlock or not, preemption is disabled on the CPU running that handler. The more time you waste in the handler, the less CPU time is granted to other tasks, which may considerably increase the latency of other interrupts, and so the latency of the whole system.
The challenge is to acknowledge the device that raised the interrupt as quickly as possible in order to keep the system responsive. On Linux systems (actually on all OSes, by hardware design), any interrupt handler runs with its current interrupt line disabled on all processors, and sometimes you may need to disable all interrupts on the CPU actually running the handler, but you definitely don't want to miss an interrupt. To meet this need, the concept of halves has been introduced.
The solution — bottom halves This idea consists of splitting the handler into two parts. The first part, called the top half, runs with interrupts disabled and performs only the time-critical minimum (such as acknowledging the interrupt), then schedules the second part; all interrupts that were disabled must have been re-enabled just before exiting the bottom half.
The second part, called the bottom half, will process time-consuming stuff, and run with the interrupt re-enabled. This way, you have the chance not to miss an interrupt.
Bottom halves are designed using a work-deferring mechanism, which we have seen previously. Depending on which one you choose, it may run in a software interrupt context, or in a process context.
Bottom-half mechanisms are softirqs, tasklets, work queues, and threaded IRQs. Bottom halves are not always possible, but when they are, they are certainly the best thing to do. Tasklets as bottom halves The tasklet deferring mechanism is most used in DMA, network, and block device drivers. In a typical handler of this kind, we use either a wait queue or a work queue in order to wake up a possibly sleeping process waiting for us, or to schedule work, depending on the value of a register.
Softirqs as bottom halves As said at the beginning of this chapter, we will not discuss softirqs in detail; tasklets will be enough wherever you feel the need to use softirqs. Anyway, let's talk about their characteristics. Softirqs run in a software interrupt context, with preemption disabled, holding the CPU until they complete. Softirqs should be fast; otherwise they may slow the system down.
When, for any reason, a softirq prevents the kernel from scheduling other tasks, any new incoming softirq will be handled by the ksoftirqd threads, running in a process context. Threaded IRQs With threaded IRQs, the way you register an interrupt handler is a bit simplified. You do not even have to schedule the bottom half yourself.
The core does that for us; the bottom half is then executed in a dedicated kernel thread. The handler given at registration represents the top-half function, which runs in an atomic (hard-IRQ) context. Wherever you would have used a work queue to schedule the bottom half, threaded IRQs can be used.
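A hedged sketch of threaded IRQ registration; the two handlers and my_probe are hypothetical names:

```c
#include <linux/interrupt.h>

/* Top half: atomic (hard-IRQ) context; do the minimum and hand over */
static irqreturn_t my_quick_check(int irq, void *dev_id)
{
    /* e.g. acknowledge the device, then request the thread */
    return IRQ_WAKE_THREAD;     /* or IRQ_HANDLED / IRQ_NONE */
}

/* Bottom half: runs in a dedicated kernel thread, sleeping allowed */
static irqreturn_t my_slow_work(int irq, void *dev_id)
{
    /* time-consuming processing: may take mutexes, allocate, etc. */
    return IRQ_HANDLED;
}

static int my_probe(void *mydev, int irq)
{
    /* IRQF_ONESHOT keeps the line masked until my_slow_work() returns */
    return request_threaded_irq(irq, my_quick_check, my_slow_work,
                                IRQF_ONESHOT, "my-device", mydev);
}
```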
This is useful for one-shot interrupts: the IRQ line will then be re-enabled only after the bottom half has finished. Invoking user space applications from the kernel User-space applications are, most of the time, called from within the user space by other applications, but the kernel can do it too. Its use is quite simple; just take a look inside kmod.c. You may be wondering what this API was defined for.
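A minimal sketch of running a user-space command from kernel code with call_usermodehelper(); the command and file path here are arbitrary illustrations:

```c
#include <linux/kmod.h>

/* Run a shell command from kernel code and wait for it to exit */
static int run_umh_example(void)
{
    char *argv[] = { "/bin/sh", "-c",
                     "echo hello > /tmp/from_kernel", NULL };
    char *envp[] = { "HOME=/",
                     "PATH=/sbin:/bin:/usr/sbin:/usr/bin", NULL };

    /* UMH_WAIT_PROC: sleep until the user-space process exits;
     * UMH_WAIT_EXEC and UMH_NO_WAIT are the other common modes */
    return call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);
}
```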
It is used by the kernel, for example, for module (un)loading and cgroup management. Summary In this chapter, we discussed the fundamental elements with which to start driver development, presenting every mechanism frequently used in drivers. This chapter is very important, since it discusses topics other chapters in this book rely on.
The next chapter, for example, dealing with character devices, will use some elements discussed in this chapter. Character Device Drivers A character device driver represents the most basic device driver in the kernel source, and builds on the basic Linux concept that says everything is a file. Character device files live in /dev, but do note that they are not the only files present in that directory. A character device file is recognizable by its type, which we can display thanks to the ls -l command. The major and minor identify and tie the devices to their drivers. In a long listing, the first character of the first column identifies the file type; possible values include c (character device), b (block device), d (directory), l (symbolic link), and - (regular file). For a device file, two numbers appear in place of the file size, X, Y: X represents the major, and Y is the minor. That is one of the classic methods for identifying a character device file from user space, as well as its major and minor. The major is represented with only 12 bits, whereas the minor is coded on the 20 remaining bits. The device is registered with a major number that identifies the device, and a minor, which you may use as an array index to a local list of devices, since one instance of the same driver may handle several devices while different drivers may handle different devices of the same type.
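The 12/20-bit split can be illustrated in plain C. These MY_* macros are user-space replicas of the kernel's MKDEV/MAJOR/MINOR macros from linux/kdev_t.h, written here only to show the encoding:

```c
#include <assert.h>   /* for the self-checks below */

/* User-space replicas of the kernel's dev_t encoding: a 12-bit major
 * in the high bits, a 20-bit minor in the low bits */
#define MY_MINORBITS 20
#define MY_MINORMASK ((1U << MY_MINORBITS) - 1)

#define MY_MKDEV(ma, mi) ((unsigned int)(((ma) << MY_MINORBITS) | (mi)))
#define MY_MAJOR(dev)    ((unsigned int)((dev) >> MY_MINORBITS))
#define MY_MINOR(dev)    ((unsigned int)((dev) & MY_MINORMASK))

/* Example: MY_MKDEV(244, 3) packs major 244 and minor 3 into one number,
 * and MY_MAJOR()/MY_MINOR() recover them */
```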
Device number allocation and freeing Device numbers identify device files across the system, which means there are two ways to allocate these device numbers (actually the major and minor): statically and dynamically. With static registration, you tell the kernel what device numbers you want; you should avoid using this as much as possible, although it returns 0 on success or a negative error code on failure just like the other method. Dynamic allocation is the recommended way to obtain a valid device number; its output parameter represents the first number the kernel assigned. The difference between the two is that with the former, you should know in advance what number you need. Static registration may be used for pedagogic purposes, and works as long as the only user of the driver is you. When it comes to loading the driver on another machine, there is no guarantee the chosen number is free on that machine, and this will lead to conflicts and trouble. The second method is cleaner and much safer, since the kernel is responsible for picking the right numbers for us.
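The dynamic method can be sketched as follows; the device name and MY_DEV_COUNT are hypothetical:

```c
#include <linux/fs.h>
#include <linux/kdev_t.h>
#include <linux/kernel.h>

static dev_t my_devt;
#define MY_DEV_COUNT 4      /* hypothetical: the driver handles 4 minors */

static int my_get_numbers(void)
{
    int ret;

    /* Dynamic (recommended): the kernel picks a free major for us,
     * reserving MY_DEV_COUNT minors starting at minor 0 */
    ret = alloc_chrdev_region(&my_devt, 0, MY_DEV_COUNT, "my_device");
    if (ret)
        return ret;

    pr_info("got major %d, first minor %d\n",
            MAJOR(my_devt), MINOR(my_devt));
    return 0;
}

static void my_put_numbers(void)
{
    unregister_chrdev_region(my_devt, MY_DEV_COUNT);
}
```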
We do not even have to bother about what the behavior would be on loading the module on another machine, since the kernel will adapt accordingly. Anyway, the preceding functions are generally not called directly from the driver, but are masked by the framework on which the driver relies (IIO framework, input framework, RTC, and so on) by means of a dedicated API. These frameworks are all discussed in further chapters of this book. Introduction to device file operations Operations that you can perform on files depend on the drivers that manage those files.
Such operations are defined in a struct file_operations; only its important methods are listed here, especially the ones that are relevant for the needs of this book. Each of these callbacks is linked with a system call, and none of them is mandatory. When a user performs a system call on the file, the kernel looks for the corresponding driver method; if it exists, it simply runs it.
If not, it returns an error code that varies depending on the system call. The difference between struct inode and struct file is that an inode doesn't track the current position within the file or the current mode. struct file, on the other hand, is used as a generic structure (it actually holds a pointer to the struct inode that represents the file) describing an open file, and provides the set of methods you can perform on the underlying file.
Such methods are open, write, seek, read, select, and so on. All this reinforces the philosophy of UNIX systems that says everything is a file. In other words, a struct inode represents a file in the kernel, and a struct file describes it when it is actually open.
There may be different file descriptors that represent the same file opened several times, but these will point to the same inode. Allocating and registering a character device Character devices are represented in the kernel as instances of struct cdev. To reach that goal, there are some steps we must go through, which are as follows: reserve a major and a range of minors, initialize the cdev with our file operations, and add it to the system. Writing file operations After introducing the preceding file operations, it is time to implement them in order to enhance the driver capabilities and expose the device's methods to the user space (by means of system calls, of course).
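The registration steps above can be sketched as follows, using designated initializers for the file operations; the my_* callbacks are hypothetical and would be implemented by the driver:

```c
#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/module.h>

/* Hypothetical callbacks, implemented elsewhere in the driver */
static int my_open(struct inode *inode, struct file *filp);
static int my_release(struct inode *inode, struct file *filp);
static ssize_t my_read(struct file *filp, char __user *buf,
                       size_t count, loff_t *ppos);
static ssize_t my_write(struct file *filp, const char __user *buf,
                        size_t count, loff_t *ppos);

static struct cdev my_cdev;
static dev_t my_devt;

/* Designated initializers: name only the callbacks you implement */
static const struct file_operations my_fops = {
    .owner   = THIS_MODULE,
    .open    = my_open,
    .release = my_release,
    .read    = my_read,
    .write   = my_write,
};

static int my_register(void)
{
    int ret;

    ret = alloc_chrdev_region(&my_devt, 0, 1, "my_device"); /* step 1 */
    if (ret)
        return ret;

    cdev_init(&my_cdev, &my_fops);                          /* step 2 */
    my_cdev.owner = THIS_MODULE;

    ret = cdev_add(&my_cdev, my_devt, 1);                   /* step 3 */
    if (ret)
        unregister_chrdev_region(my_devt, 1);
    return ret;
}
```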
Each of these methods has its particularities, which we will highlight in this section. Exchanging data between kernel space and user space This section does not describe any driver file operation, but instead introduces some kernel facilities that you may use to write those driver methods. The driver's write method consists of copying data from user space into kernel space, and then processing that data from the kernel. Such processing could be something like pushing the data to the device, for example. The driver's read method, on the other hand, consists of copying data from the kernel to user space. Both of these methods introduce new elements we need to discuss prior to jumping into their respective steps: the kernel cannot directly dereference user-space pointers, which brings in the dedicated kernel functions needed to access such memory, either to read or write.
Each of these functions (copy_from_user() and copy_to_user()) returns the number of bytes that could not be copied; on success, the return value is 0. A single value copy When it comes to copying single and simple variables, such as char and int, but not larger data types, such as structures or arrays, the kernel offers dedicated macros, get_user(x, ptr) and put_user(x, ptr), in order to quickly perform the desired operation. In other words, x and the object pointed to by ptr must have (or point to) the same type. Please do note that x is set to 0 on error, and that the result of dereferencing ptr must be assignable to x without a cast. Guess what it means. The open method open is the method called every time someone opens your device's file. Device opening will always succeed in cases where this method is not defined. You usually use this method to perform device and data structure initialization, and return a negative error code if something goes wrong, or 0 on success.
Per-device data For each open performed on your character device, the open callback is given a struct inode as a parameter, which is the kernel's lower-level representation of the file; its i_cdev field points to the cdev we registered, so an open method usually uses container_of() on it to retrieve the per-device data. The release method The release method is called when the device gets closed, the reverse of the open method. You must then undo everything you have done in the open task. What you have to do is roughly:
1. Free any private memory allocated during the open step.
2. Shut down the device (if supported), and discard every buffer on the last closing (if the device supports multiple openings, or if the driver can handle more than one device at a time).
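The open/release pair can be sketched as follows; struct my_device and its buffer are hypothetical per-device data:

```c
#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/kernel.h>

struct my_device {
    struct cdev cdev;   /* embedded, so container_of() can find us */
    char buf[256];      /* hypothetical per-device buffer */
};

static int my_open(struct inode *inode, struct file *filp)
{
    /* inode->i_cdev points at the cdev embedded in our structure */
    struct my_device *dev = container_of(inode->i_cdev,
                                         struct my_device, cdev);

    filp->private_data = dev;   /* stash it for read/write/release */
    return 0;                   /* 0 on success, negative errno on error */
}

static int my_release(struct inode *inode, struct file *filp)
{
    /* undo whatever open did: free private memory, power down, ... */
    filp->private_data = NULL;
    return 0;
}
```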
The write method The write method is used to send data to the device; whenever a user app calls the write function on the device's file, the kernel implementation is called.
Steps to write The following steps do not describe any standard or universal method to implement the driver's write method. They are just an overview of the kinds of operation you can perform in this method:
1. Check for bad or invalid requests coming from user space.
2. Adjust count for the remaining bytes, in order not to go beyond the file size.
3. Find the location from which you will start to write. This step is relevant only if the device has a memory into which the write method is supposed to write the given data.
4. Copy the data from user space and write it into the device memory.
5. Increase the current position of the cursor in the file, according to the number of bytes written.
Steps to read The read method follows similar steps; in particular, the number of bytes read can't go beyond the file size.
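The steps above can be sketched with write and read methods for a hypothetical memory-backed device (my_buf stands in for real device memory):

```c
#include <linux/fs.h>
#include <linux/uaccess.h>

#define MY_BUF_SIZE 256
static char my_buf[MY_BUF_SIZE];    /* hypothetical device memory */

static ssize_t my_write(struct file *filp, const char __user *ubuf,
                        size_t count, loff_t *ppos)
{
    if (*ppos >= MY_BUF_SIZE)               /* step 1: bad request */
        return -EINVAL;
    if (*ppos + count > MY_BUF_SIZE)        /* step 2: clamp count */
        count = MY_BUF_SIZE - *ppos;

    /* steps 3-4: copy from user space into the device memory,
     * starting at the current position */
    if (copy_from_user(my_buf + *ppos, ubuf, count))
        return -EFAULT;   /* nonzero = bytes that could not be copied */

    *ppos += count;                         /* step 5: move the cursor */
    return count;
}

static ssize_t my_read(struct file *filp, char __user *ubuf,
                       size_t count, loff_t *ppos)
{
    if (*ppos >= MY_BUF_SIZE)
        return 0;                           /* end of file */
    if (*ppos + count > MY_BUF_SIZE)
        count = MY_BUF_SIZE - *ppos;        /* don't read past the end */

    if (copy_to_user(ubuf, my_buf + *ppos, count))
        return -EFAULT;

    *ppos += count;
    return count;
}
```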
The llseek method The llseek function is called when you move the cursor position within a file. The entry point of this method in user space is lseek. You can refer to the man page in order to print the full description of either method from user space: man llseek and man lseek.
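A sketch of the llseek method for the same hypothetical memory-backed device (MY_BUF_SIZE stands for the device memory size):

```c
#include <linux/fs.h>

#define MY_BUF_SIZE 256     /* hypothetical device memory size */

static loff_t my_llseek(struct file *filp, loff_t offset, int whence)
{
    loff_t newpos;

    switch (whence) {
    case SEEK_SET:                  /* from the start of the file */
        newpos = offset;
        break;
    case SEEK_CUR:                  /* from the current position */
        newpos = filp->f_pos + offset;
        break;
    case SEEK_END:                  /* from the end of the file */
        newpos = MY_BUF_SIZE + offset;
        break;
    default:
        return -EINVAL;
    }

    if (newpos < 0 || newpos > MY_BUF_SIZE) /* validate the new cursor */
        return -EINVAL;

    filp->f_pos = newpos;           /* update the current position */
    return newpos;                  /* return the new position */
}
```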
The return value is the new position in the file. Steps to llseek Use the whence parameter to compute the new position, from the start of the file (SEEK_SET), from the current position (SEEK_CUR), or from the end of the file (SEEK_END); then check that the new position is valid, update the file's current position, and return it. A user program can then successively read and seek within the file. The poll method A user process can run poll(), select(), or epoll system calls to add a set of files to a list on which it needs to wait, in order to be aware of the associated devices' readiness (if any).
The kernel will then call the poll entry of the driver associated with each device file. The usual way is to use one wait queue per event type (one for readability, another one for writability, and possibly one for exceptions, if needed), according to the events supported by the select or poll system call. For a device that supports both blocking read and write you would use two; of course, you may implement only one of these. If the driver does not define this method, the device will be considered as always readable and writable, so that poll or select system calls return immediately.
Steps to poll When you implement the poll function, either the read or write method may change:. You can notify readable events either from within the driver's write method, meaning that the written data can be read back, or from within an IRQ handler, meaning that an external device sent some data that can be read back. On the other hand, you can notify writable events either from within the driver's read method, meaning that the buffer is empty and can be filled again, or from within an IRQ handler, meaning that the device has completed a data-send operation and is ready to accept data again.
The wait queue used in poll must be used in read too. When the user needs to read, if there is data, that data is sent immediately to the process, and you must update the wait queue condition (set it to false); if there is no data, the process is put to sleep in the wait queue. If the write method is supposed to feed data, then in the write callback you must fill the data buffer, update the wait queue condition (set it to true), and wake up the reader (see the wait queue section).
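A sketch of a poll method for a device supporting blocking read and write; the wait queues, flags, and the producer helper are hypothetical:

```c
#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(my_rd_wq);   /* readability events */
static DECLARE_WAIT_QUEUE_HEAD(my_wr_wq);   /* writability events */
static bool data_available;                 /* hypothetical conditions */
static bool room_available = true;

static unsigned int my_poll(struct file *filp, poll_table *wait)
{
    unsigned int mask = 0;

    /* Register both queues; poll_wait() never blocks by itself */
    poll_wait(filp, &my_rd_wq, wait);
    poll_wait(filp, &my_wr_wq, wait);

    if (data_available)
        mask |= POLLIN | POLLRDNORM;        /* readable */
    if (room_available)
        mask |= POLLOUT | POLLWRNORM;       /* writable */
    return mask;
}

/* Producer side (write method or IRQ handler): publish data and wake
 * up any process sleeping in poll()/select() or in a blocking read() */
static void my_notify_readable(void)
{
    data_available = true;
    wake_up_interruptible(&my_rd_wq);
}
```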
If it is an IRQ instead, these operations must be performed in its handler. The ioctl method A typical Linux system contains a few hundred system calls (syscalls), but only a few of them are linked with file operations. Sometimes devices may need to implement specific commands that are not provided by system calls, especially the ones associated with files and thus device files. You can use ioctl to send special commands to devices (reset, shutdown, configure, and so on). In order to be valid and safe, an ioctl command needs to be identified by a number, which should be unique to the system. The uniqueness of ioctl numbers across the system prevents sending the right command to the wrong device, or passing the wrong argument to the right command (given a duplicated ioctl number).
Linux provides four helper macros to create an ioctl identifier, depending on whether there is data transfer or not, and on the direction of the transfer. An ioctl number embeds: 1. A number coded on 8 bits (0 to 255), called a magic number. 2. A sequence number or command ID, also on 8 bits. 3. A data type, if any, that will inform the kernel about the size to be copied. 4. The direction of the transfer, if any.
Generating ioctl numbers (commands) You should generate your own ioctl numbers in a dedicated header file. It is not mandatory, but it is recommended, since this header should be available in user space too. In other words, you should duplicate the ioctl header file so that there is one in the kernel and one in the user space, which you can include in user apps.
With these helper macros, generating ioctl numbers for a real device is straightforward.

Steps for ioctl

First, let's have a look at the unlocked_ioctl prototype. There is really only one step: use a switch statement on the command. If you think your ioctl command will need more than one argument, you should gather those arguments in a structure and just pass a pointer to the structure to ioctl. Such structures can be set up using C's designated initializers. This consists of naming the member you need to assign a value to, in the form .member = value. This allows, among other things, initializing the members in an undefined order, or omitting the fields that we do not want to set.
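A minimal sketch of such a switch-based handler follows; the command names, the `eep_` prefix, and the state variables are hypothetical, not the book's actual example:

```c
#include <linux/fs.h>
#include <linux/ioctl.h>
#include <linux/uaccess.h>

#define EEP_MAGIC    'E'
#define EEP_GET_SIZE _IOR(EEP_MAGIC, 1, int)
#define EEP_SET_ADDR _IOW(EEP_MAGIC, 2, int)

static int eep_size = 4096;   /* hypothetical device property */
static int eep_addr;

static long eep_ioctl(struct file *file, unsigned int cmd,
                      unsigned long arg)
{
    switch (cmd) {
    case EEP_GET_SIZE:
        /* Copy a value out to the user-provided pointer. */
        if (copy_to_user((int __user *)arg, &eep_size, sizeof(int)))
            return -EFAULT;
        return 0;
    case EEP_SET_ADDR:
        /* Copy a value in from user space. */
        if (copy_from_user(&eep_addr, (int __user *)arg, sizeof(int)))
            return -EFAULT;
        return 0;
    default:
        /* Unknown command: the conventional error is -ENOTTY. */
        return -ENOTTY;
    }
}
```

The handler is then hooked into the driver's file_operations through the .unlocked_ioctl member.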
Summary

In this chapter, we have demystified character devices and we have seen how to let users interact with our driver through device files. We learned how to expose file operations to the user space and control their behavior from within the kernel.
We went so far that you are even able to implement multi-device support. The next chapter is a bit hardware-oriented, since it deals with platform drivers, which expose hardware device capabilities to the user space. The power of character drivers combined with platform drivers is just amazing. See you in the next chapter.

Hot-pluggable devices are handled by the kernel as soon as they are plugged in. But other device types also exist, which are not hot-pluggable and which the kernel needs to know about prior to their being managed.
The controllers of the buses such devices sit on are themselves hardware devices. Since they are part of the SoC, they can't be removed and are non-discoverable; they are also called platform devices. People often say platform devices are on-chip devices (embedded in the SoC). In practice, this is only partially true: they are indeed hard-wired into the chip and can't be removed, but devices connected to I2C or SPI buses are not on-chip, and they are platform devices too, because they are not discoverable.
From the SoC point of view, those devices are connected internally through dedicated buses, which are most of the time proprietary and specific to the manufacturer. From the kernel point of view, they are root devices, connected to nothing. That is where the pseudo platform bus comes in: the pseudo platform bus, also called the platform bus, is a kernel virtual bus for devices that do not sit on a physical bus known to the kernel.
In this chapter, platform devices refers to devices that rely on the pseudo platform bus. Dealing with them involves two steps:

1. Register a platform driver (with a unique name) that will manage your devices
2. Register your platform devices, with exactly the same name as the driver, along with their resources, in order to let the kernel know that the devices are present

This chapter covers:

- Platform devices along with their drivers
- Devices and driver matching mechanisms in the kernel
- Registering platform drivers with devices, as well as platform data

Platform drivers

Before going any further, please pay attention to the following warning: not all platform devices are handled by platform drivers (or should I say pseudo platform drivers).
Platform drivers are dedicated to devices that do not sit on conventional buses; everything needs to be done manually in the platform driver. The platform driver must implement a probe function, called by the kernel when the module is inserted or when a device claims it. Let's see the meaning of each element that composes the platform_driver structure, and what it is used for:
Later, we will see how probe is called by the core. A platform driver can be made available to the system with either platform_driver_register() or platform_driver_probe(). The difference between those functions is that the former registers the driver and puts it on a list, so that it can be probed on demand whenever a matching device shows up (allowing deferred probing), whereas the latter probes immediately for matching devices; if none is found, the driver is ignored. To prevent your driver from being inserted and registered in that list, just use platform_driver_probe(); this method prevents deferred probing, since it does not keep the driver registered on the system. Most drivers, however, are registered with the module_platform_driver() macro. This macro is responsible for registering our module with the platform driver core, and it generates the module's init and exit functions. It does not mean that those functions are not called anymore, just that we can forget about writing them ourselves.
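Putting these pieces together, a skeleton platform driver for a v4.x-era kernel might look like the following (the driver name "my-pdev" and function names are hypothetical):

```c
#include <linux/module.h>
#include <linux/platform_device.h>

/* Called whenever a device with a matching name shows up. */
static int my_pdrv_probe(struct platform_device *pdev)
{
    dev_info(&pdev->dev, "device matched, setting it up\n");
    return 0;   /* 0 means the driver successfully claimed the device */
}

/* Called when the device goes away or the module is removed. */
static int my_pdrv_remove(struct platform_device *pdev)
{
    dev_info(&pdev->dev, "device removed\n");
    return 0;
}

static struct platform_driver my_pdrv = {
    .probe  = my_pdrv_probe,
    .remove = my_pdrv_remove,
    .driver = {
        .name  = "my-pdev",   /* must match the device's name */
        .owner = THIS_MODULE,
    },
};

/* Expands to module init/exit functions that call
 * platform_driver_register()/platform_driver_unregister(). */
module_platform_driver(my_pdrv);

MODULE_LICENSE("GPL");
```

Because module_platform_driver() writes the init and exit functions for us, the module body reduces to the structure definition and the callbacks.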
The probe function is not a substitute for the init function. The probe function is called every time a given device matches the driver, whereas the init function runs only once, when the module gets loaded.
There are specific registration macros for each bus type that you may need to register a driver with. The following list is not exhaustive: module_platform_driver() for platform drivers, module_i2c_driver() for I2C drivers, module_spi_driver() for SPI drivers, module_usb_driver() for USB drivers, and module_pci_driver() for PCI drivers.

Platform devices

Actually, we should have said pseudo platform devices, since this section concerns devices that sit on the pseudo platform bus. When you are done with the driver, you will have to feed the kernel with the devices that need that driver.
Resources and platform data

Unlike with hot-pluggable devices, the kernel has no idea what platform devices are present on your system, what they are capable of, or what they need in order to work properly. There is no auto-negotiation process, so any information provided to the kernel is welcome.
Device provisioning — the old and deprecated way

This method is to be used with kernel versions that do not support a device tree. With this method, drivers remain generic, and devices are registered in board-related source files.
Resources

Resources represent all the elements that characterize a device from the hardware point of view, and that the device needs in order to be set up and work properly. Once you have provided the resources, you need to extract them back in the driver in order to work with them. The probe function is a good place to extract them.
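For a device providing, say, one memory region and one IRQ line, the extraction in probe could be sketched as follows (a sketch only; the function and device names are illustrative):

```c
#include <linux/platform_device.h>
#include <linux/ioport.h>

/* Hypothetical probe extracting one memory region and one IRQ. */
static int my_pdrv_probe(struct platform_device *pdev)
{
    struct resource *mem;
    int irq;

    /* Index 0: the first memory resource of this device. */
    mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    if (!mem)
        return -ENODEV;

    /* platform_get_irq() is the shortcut for IORESOURCE_IRQ. */
    irq = platform_get_irq(pdev, 0);
    if (irq < 0)
        return irq;

    dev_info(&pdev->dev, "mem start %pa, irq %d\n",
             &mem->start, irq);
    return 0;
}
```

The memory region would then typically be mapped with devm_ioremap_resource() before being used.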
Let's see how to pick them, using platform_get_resource(). The first parameter is an instance of the platform device itself. The second parameter says what kind of resource we need (IORESOURCE_MEM for memory regions or IORESOURCE_IRQ for interrupt lines, for example). The third parameter, num, is an index that says which resource of that type is desired: zero indicates the first one, and so on.

Platform data

Any other data whose type is not part of the resource types enumerated in the preceding section falls here (for example, GPIO numbers).
Let's say, for example, that you declare a platform device that needs two GPIO numbers as platform data, and one IRQ number and two memory regions as resources. The following example shows how to register platform data along with the device; devices are registered along with their resources and data.
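A board-file sketch under those assumptions could look like this (all names, addresses, GPIO and IRQ numbers are made up for illustration):

```c
#include <linux/platform_device.h>
#include <linux/ioport.h>

/* Hypothetical platform data: two GPIO numbers. */
struct my_pdata {
    int reset_gpio;
    int power_gpio;
};

static struct my_pdata my_device_pdata = {
    .reset_gpio = 12,
    .power_gpio = 13,
};

/* Two memory regions and one IRQ as resources. */
static struct resource my_device_resources[] = {
    {
        .start = 0x10000000,
        .end   = 0x10000fff,
        .flags = IORESOURCE_MEM,
        .name  = "registers",
    },
    {
        .start = 0x20000000,
        .end   = 0x200000ff,
        .flags = IORESOURCE_MEM,
        .name  = "buffer",
    },
    {
        .start = 42,              /* IRQ line number */
        .end   = 42,
        .flags = IORESOURCE_IRQ,
    },
};

static struct platform_device my_device = {
    .name          = "my-pdev",   /* must match the driver's name */
    .id            = 0,
    .resource      = my_device_resources,
    .num_resources = ARRAY_SIZE(my_device_resources),
    .dev = {
        .platform_data = &my_device_pdata,
    },
};

/* In the board init code: platform_device_register(&my_device); */
```

The driver's probe would then retrieve the platform data through dev_get_platdata(&pdev->dev) and the resources as shown earlier.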
The name of the device is very important, and is used by the kernel to match the driver with the same name.
Device provisioning — the new and recommended way

With the first method, any modification necessitates rebuilding the whole kernel. In order to keep things simple, and to separate device declarations (since they are not really part of the kernel) from the kernel source, a new concept has been introduced: the device tree. The main goal of the device tree source (DTS) is to remove very specific and never-tested code from the kernel.
With the device tree, platform data and resources are homogeneous. The device tree is a hardware description file, with a format similar to a tree structure, where every device is represented by a node, and any data, resource, or configuration is represented as a property of the node. This way, you only need to recompile the device tree when you make some modifications.
The device tree forms the subject of the next chapter, where we will see how to introduce it to the platform device. By the end of this book, you will be comfortable with the concepts of device driver development and will be in a position to write any device driver from scratch, using a recent (v4.x) kernel.
Linux started in 1991 as a hobby project of a Finnish student, Linus Torvalds. The project has gradually grown, and still does, with roughly 1,000 contributors around the world.
Nowadays, Linux is a must, in embedded systems as well as on servers. The kernel is the central part of an operating system, and its development is not so obvious. This book tries to be as generic as possible. There is one special topic, the device tree, which is not a full x86 feature yet.
That topic will therefore be dedicated to ARM processors, and to all those fully supporting the device tree. Why those architectures? Because they are the most used on desktops and servers (x86) and on embedded systems (ARM).
Before you start any development, you need to set up an environment. The environment dedicated to Linux development is quite simple, at least on Debian-based systems, where you need to install the native build tools (the build-essential package, among others). Since we will also target ARM, you should install a gcc-arm cross-compiler (such as the gcc-arm-linux-gnueabihf package) as well.
I'm running Ubuntu. My favorite editor is Vim, but you are free to use the one you are most comfortable with.

In the early kernel days (until 2003), an odd-even versioning style was used, where even minor numbers denoted stable releases and odd numbers unstable (development) ones. When the 2.6 version arrived, the scheme switched to X.Y.Z, where X is the major version, Y the minor version, and Z the patch release. This is called semantic versioning, and it was used until version 2.6.39, when the X.Y scheme was adopted. In the 3.x era, the numbers no longer carried semantic meaning, which is the reason why the version could move from 3.19 straight to 4.0. Now, the kernel uses an arbitrary X.Y versioning scheme, which has nothing to do with semantic versioning.

The kernel must remain portable.
Any architecture-specific code should be located in the arch directory. The Linux kernel is a makefile-based project, with thousands of options and drivers. To configure your kernel, use either make menuconfig for an ncurses-based interface or make xconfig for an X-based interface. Once chosen, the options are stored in a .config file at the root of the source tree. In most cases, there will be no need to start a configuration from scratch.
There are default and useful configuration files available in each arch directory (under arch/<arch>/configs/), which you can use as a starting point.