- 19 Apr, 2018 6 commits
-
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
- 18 Apr, 2018 3 commits
-
-
Philippe Gerum authored
Unlike glibc, libcobalt may return zero as a valid timer id. Use a threadobj status flag to figure out whether a periodic timer was set for the thread, instead of testing periodic_timer for NULLness. This fixes set_periodic/wait_period services which have been broken since commit #73de42cc was merged.
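The fix described above can be sketched as follows. This is an illustration only, not the actual libcobalt code: the flag name `THREADOBJ_PERIODIC`, the struct layout, and the helper names are all hypothetical.

```c
/* Hypothetical illustration: with libcobalt, zero may be a valid timer
 * id, so track "a periodic timer was set" with a status bit instead of
 * testing the timer id for NULL/zero. */
#define THREADOBJ_PERIODIC 0x1

struct threadobj {
	int status;
	int periodic_timer;	/* may legitimately be 0 */
};

void set_periodic(struct threadobj *t, int timer_id)
{
	t->periodic_timer = timer_id;
	t->status |= THREADOBJ_PERIODIC; /* remember a timer was set */
}

/* Returns 0 on success, -1 (an EWOULDBLOCK-like error) if no periodic
 * timer was ever set for this thread. */
int wait_period(struct threadobj *t)
{
	if (!(t->status & THREADOBJ_PERIODIC))
		return -1; /* testing periodic_timer == 0 would be wrong */
	return 0;
}
```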
-
Philippe Gerum authored
-
Philippe Gerum authored
-
- 17 Apr, 2018 1 commit
-
-
Jan Kiszka authored
We run this section with cancellation deferred, so no termination is possible and there is nothing to clean up. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
- 13 Apr, 2018 1 commit
-
-
Philippe Gerum authored
-
- 12 Apr, 2018 5 commits
-
-
Philippe Gerum authored
The tracepoint array must by essence be initialized early, when no real-time requirement exists yet. There is no need to pull such memory from the real-time allocator; plain malloc is fine.
-
Philippe Gerum authored
fusefs handlers are processed on behalf of the non-rt fusefs server thread, which by design never holds rt locks while issuing memory management calls (e.g. collect_wait_list()). So there is no need to pull memory from the private real-time allocator, whose storage may be limited; plain malloc is fine. At this chance, add a few missing __STD() notations to existing free() calls paired with __STD(malloc()).
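A minimal sketch of the pairing rule described above. __STD() is Xenomai's notation for forcing a call to the plain libc symbol rather than the Cobalt wrapper; here it is stubbed as a no-op macro so the sketch compiles outside a Xenomai tree, and `dup_label`/`free_label` are hypothetical helper names:

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for the real macro from Xenomai's headers, which routes the
 * call to the regular libc symbol instead of the Cobalt wrapper. */
#define __STD(call) call

static char *dup_label(const char *s)
{
	/* Non-rt context (e.g. a fusefs handler): plain malloc is fine,
	 * no need to pull from the real-time allocator. */
	char *p = __STD(malloc(strlen(s) + 1));
	if (p)
		strcpy(p, s);
	return p;
}

static void free_label(char *p)
{
	/* Pair every __STD(malloc()) with a matching __STD(free()). */
	__STD(free(p));
}
```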
-
Philippe Gerum authored
Obstacks are grown from handlers called by the fusefs server thread, which has no real-time requirement: malloc/free is fine for memory management.
-
Jan Kiszka authored
It is not really safe to allow a potentially modifying operation to be canceled midway, merely dropping the lock during rollback. Better to defer cancellation until the lock is dropped. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
This reverts commit 0f19393b. No reason was given back then, and there is a risk that this code is used by priority-sensitive threads. Moreover, pvhash still uses PI, unless it is mapped onto the plain hash in case of !CONFIG_XENO_PSHARED. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
- 11 Apr, 2018 2 commits
-
-
Philippe Gerum authored
Caller of threadobj_wait_period() should receive -EWOULDBLOCK if threadobj_set_periodic() was not issued earlier.
-
Philippe Gerum authored
-
- 10 Apr, 2018 1 commit
-
-
Philippe Gerum authored
Wait for the idling interface rework from I-pipe/4.14, which will provide more information for determining whether Cobalt should be ok with entering the target idle state. As a result of this change, the original kernel behavior is restored for all ipipe-4.9.y patches with respect to entering an idle state, including for the releases lacking commits #89146106e8 or #8d3fa22c95. This change only affects kernels built with CONFIG_CPU_IDLE enabled. NOTE: XNIDLE is intentionally kept for future use in the Cobalt core.
-
- 08 Apr, 2018 1 commit
-
-
Philippe Gerum authored
-
- 05 Apr, 2018 1 commit
-
-
Philippe Gerum authored
-
- 03 Apr, 2018 1 commit
-
-
Philippe Gerum authored
-
- 01 Apr, 2018 2 commits
-
-
Philippe Gerum authored
The copy of the argument vector we build for preprocessing Xenomai options must have argc + 1 cells, with argv[argc] set to NULL [1]. See http://xenomai.org/pipermail/xenomai/2018-March/038593.html. [1] https://www.gnu.org/software/libc/manual/html_node/Program-Arguments.html
-
-
- 29 Mar, 2018 1 commit
-
-
Philippe Gerum authored
Timers may have specific CPU affinity requirements in that their backing clock device might not beat on all available CPUs, but only on a subset of them. The CPU affinity of every timer bound to a particular thread has to be tracked each time such timer is started, so that no core timer is queued to a per-CPU list which won't receive any event from the backing clock device. Such tracking was missing for timerfd and POSIX timers, along with internal timers running the sporadic scheduling policy. At this chance, the timer affinity code was cleaned up by folding all the affinity selection logic into a single call, i.e. xntimer_set_affinity().
-
- 20 Mar, 2018 1 commit
-
-
Philippe Gerum authored
Since kernel 4.9, the pipeline code may ask us whether it would be fine to enter the idle state on the current CPU, by means of a probing hook called ipipe_enter_idle_hook(). Provide this hook, considering that the absence of outstanding timers means idleness to us.
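The policy boils down to a single predicate. A toy sketch under stated assumptions: the function name and the timer count parameter are hypothetical, standing in for what the real ipipe_enter_idle_hook() implementation checks against the per-CPU timer queues:

```c
#include <stdbool.h>

/* Illustration of the policy described above: report the CPU as safe
 * to enter the idle state only when no Cobalt timer is outstanding on
 * it. The real hook consults the per-CPU timer queues instead of
 * taking a count as an argument. */
static bool may_enter_idle(int nr_outstanding_timers)
{
	return nr_outstanding_timers == 0;
}
```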
-
- 16 Mar, 2018 1 commit
-
-
Philippe Gerum authored
threadobj_unblock() simply does not work, dereferencing a NULL pointer whenever it actually manages to unblock a thread waiting on a synchronization object. Calling syncobj_flush() on this object to wake up waiters zeroes the wait_sobj field in the corresponding TCBs, so don't dereference thobj->wait_sobj past this point.

Thread 1 "main" received signal SIGSEGV, Segmentation fault.
0x00007ffff79aeda0 in __syncobj_tag_unlocked (sobj=0x0)
    at include/copperplate/syncobj.h:100
100		assert(sobj->flags & SYNCOBJ_LOCKED);
(gdb) bt
-
- 14 Mar, 2018 1 commit
-
-
Jan Kiszka authored
This both de-duplicates the code and ensures that all fields are zeroed prior to calling one of the actual tcb initialization functions. Specifically, if host_task is not properly cleared, we may trigger a bug when using the field earlier, e.g.

general protection fault: 0000 [#1] PREEMPT SMP
[...]
RIP: 0010:[<ffffffff81185a3c>] [<ffffffff81185a3c>] xnthread_host_pid+0x1c/0x30
[...]
Call Trace:
 [<ffffffff8117c987>] trace_event_raw_event_cobalt_thread_set_current_prio+0x57/0xa0
 [<ffffffff8117f33d>] xnsched_set_effective_priority+0x8d/0xc0
 [<ffffffff8117a1e4>] xnsched_rt_setparam+0x14/0x30
 [<ffffffff8117e700>] xnsched_set_policy+0xc0/0x170
 [<ffffffff81185687>] __xnthread_init+0x317/0x3d0
 [<ffffffff8114a3e8>] ? trace_buffer_unlock_commit+0x58/0x70
 [<ffffffff811857bb>] xnthread_init+0x7b/0x110

Signed-off-by:
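The pattern this change enforces can be sketched as below. The struct layout and the helper name `__tcb_init_common` are hypothetical, for illustration only:

```c
#include <string.h>

/* Hypothetical TCB layout for illustration. */
struct tcb {
	void *host_task;
	int prio;
};

/* Common initialization helper: zero every field up front, so code
 * running before the type-specific setup (e.g. a tracepoint reading
 * host_task) never sees stale garbage. */
static void __tcb_init_common(struct tcb *t, int prio)
{
	memset(t, 0, sizeof(*t));
	t->prio = prio;
}
```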
Jan Kiszka <jan.kiszka@siemens.com>
-
- 10 Mar, 2018 2 commits
-
-
Philippe Gerum authored
-
Philippe Gerum authored
-
- 08 Mar, 2018 10 commits
-
-
Philippe Gerum authored
-
Philippe Gerum authored
Restore the original sequence for deleting an internal service task that is most likely waiting for an input event: first destroy the event resource, which should trigger an immediate return from the wait call with -EIDRM, then send a cancellation request via rtdm_task_destroy() for good measure, to exit the work loop in case the task was not asleep on that event. This obviously assumes that all callers waiting for events do check the return value, as they must.
-
Philippe Gerum authored
The network stack stopped using netdev->last_rx a long time ago, and this field was removed during the 4.11 development cycle (#4a7c972644c1).
-
Philippe Gerum authored
This allows RTnet to define requests in the SIOCPROTOPRIVATE range for identifying device-specific features added to the converted NIC driver. Therefore, no execution mode is enforced by the base handler; the callee should check for the current mode, returning -ENOSYS to trigger the adaptive switch if required.
-
Philippe Gerum authored
Align on the signature of the regular ndo_do_ioctl() handler for interface-directed ioctl requests, since an ifr block must have been provided by the caller to determine the device to hand over the request to anyway.
-
Philippe Gerum authored
Assume this feature was originally provided by the regular driver converted to RTnet, which we may want to support thoroughly, including when tapping into the common PHY layer is required. To this end, we need to enter the ioctl handler from secondary mode only, which should not be an issue since there is no point in expecting ethtool requests to be part of the time-critical code anyway.
-
Philippe Gerum authored
No more in-tree users for those, in the wake of dropping the broken direct references from the kernel to user-space memory.
-
Philippe Gerum authored
-
Philippe Gerum authored
-