There are two types of mutex mechanisms present in the NicheStack OS. They are:
1. NET Resource Method: This allows the programmer to obtain and release a mutex for accessing shared resources. The APIs used to obtain and release these mutexes are LOCK_NET_RESOURCE() and UNLOCK_NET_RESOURCE() respectively.
In order to use such a mutex in a project, we must create it first. Usually the mutexes are created during the OS initialization phase by calling the mutex create API. All the mutexes used in a project are listed in the ipport.h file, and the mutexes are created during OS initialization based on the maximum mutex ID number.
The NicheStack OS also has the TRY_NET_RESOURCE() and UNLOCK_NET_RESOURCE() locking mechanism. The difference between LOCK_NET_RESOURCE and TRY_NET_RESOURCE is that the former waits until the mutex is acquired, while the latter only checks whether the mutex can be taken: if it cannot, it returns immediately without waiting, so the caller can skip the protected work.
A few mandatory mutexes present in the NicheStack OS are NET_RESID, RXQ_RESID and FREEQ_RESID. NET_RESID must be obtained by the higher-level application while accessing the sockets, TCP, UDP and IP layers of the stack. RXQ_RESID must be obtained while accessing the received-data queue structure (getq() and putq()). FREEQ_RESID must be obtained while accessing the free packet buffer queue structure (PK_ALLOC() and PK_FREE()).
Points to consider while porting: if one task has obtained the NET_RESID mutex, then any other task that needs it must wait until the first task releases it. Different mutexes do not affect each other, meaning that if one task is holding NET_RESID, another task can still obtain any other mutex except the NET_RESID mutex. If a task needs both NET_RESID and FREEQ_RESID, then NET_RESID should be locked first, followed by FREEQ_RESID; when releasing, FREEQ_RESID is released first, followed by the NET_RESID mutex. The same ordering applies in the case of RXQ_RESID and FREEQ_RESID.
The porting engineer can add a new mutex if they want to protect additional shared resources. One use case scenario: if multiple client instances access the same shared memory variables, then that access must be protected using a mutex.
Never nest calls on the same mutex. For example:

Task1()
{
    LOCK_NET_RESOURCE(NET_RESID);
    /* ... */
    LOCK_NET_RESOURCE(NET_RESID);
}

Here the task tries to acquire the same mutex twice. Since the mutex is not recursive, the second call blocks forever waiting for a release that can never happen, and the task deadlocks.
Use case scenario of the NET Resource method implementation:
Consider there are 2 tasks, FTP
2. Critical Section Method: The APIs used for entering and exiting the critical sections are ENTER_CRIT_SECTION() and EXIT_CRIT_SECTION() respectively. On entering the critical section all interrupts are disabled, and they are enabled again on exiting the critical section.
The NicheStack OS and TCP/IP stack can run on top of an RTOS or directly on the hardware. If there is no RTOS and no ISR accesses any shared resource, then the enter and exit critical section APIs can be no-ops. If an ISR does access a shared resource, then the critical section method should be used.
In the case of hard real-time system projects, care should be taken that the ISRs do not access any shared resources, so that entering the critical section only has to hold off other tasks, not the ISRs.
The entering and exiting calls can sometimes be nested. For example:

Func1()
{
    ENTER_CRIT_SECTION();
    /* ... */
    EXIT_CRIT_SECTION();
}

Func2()
{
    ENTER_CRIT_SECTION();
    Func1();
    /* ... */
    EXIT_CRIT_SECTION();
}
The main difference between the critical section and the mutex is that the former completely disables all interrupts, while the latter only blocks other tasks from accessing the shared resource. The critical section method is mostly used at the low levels, and because it disables interrupts, the protected execution must not take too long.
Please check out this project to get more details about the implementation.