Tuesday, 4 January 2022
Nichestack OS tutorial 1 - Process of Starting a Module
Monday, 3 January 2022
Nichestack OS tutorial 2 - NET Resource method and Critical Section
There are two types of mutual-exclusion mechanisms present in the Nichestack OS. They are:
1. NET Resource Method: This allows the programmer to obtain and release a mutex for accessing shared resources. The APIs used to obtain and release the mutexes are LOCK_NET_RESOURCE() and UNLOCK_NET_RESOURCE() respectively.
In order to use a mutex in a project, it must be created first. Usually, the mutexes are created during the OS initialization phase by calling the mutex-create API. All the mutexes used in a project are listed in the ipport.h file, and they are created during OS initialization based on the maximum mutex ID.
The Nichestack OS also provides the TRY_NET_RESOURCE() and UNLOCK_NET_RESOURCE() locking mechanism. The difference between LOCK_NET_RESOURCE and TRY_NET_RESOURCE is that the former waits until the mutex is acquired, whereas the latter only checks whether the mutex is available: if it is, it is taken; otherwise the call returns immediately without blocking.
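The blocking-versus-non-blocking difference can be sketched with a tiny model mutex. This is illustrative only, not the Nichestack implementation (the struct and function names here are hypothetical):

```c
#include <stdbool.h>

/* Minimal model of a mutex for demonstration purposes only */
struct net_mutex
{
    bool locked;
};

/* TRY-style: take the mutex only if it is free; never block */
static bool try_net_resource( struct net_mutex * m )
{
    if ( m->locked )
    {
        return false;   /* someone else holds it: skip the work */
    }
    m->locked = true;
    return true;
}

/* LOCK-style: wait until the mutex becomes free, then take it */
static void lock_net_resource( struct net_mutex * m )
{
    while ( m->locked )
    {
        /* in a real RTOS the task would block here, not busy-wait */
    }
    m->locked = true;
}

static void unlock_net_resource( struct net_mutex * m )
{
    m->locked = false;
}
```

A task using the TRY variant typically skips the protected work (or retries later) when `try_net_resource()` returns false.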
A few mandatory mutexes present in the Nichestack OS are NET_RESID, RXQ_RESID and FREEQ_RESID. NET_RESID must be obtained by a higher-level application while accessing the Sockets, TCP, UDP and IP layers of the stack. RXQ_RESID must be obtained while accessing the received-data queue structure (getq() and putq()). FREEQ_RESID must be obtained while accessing the free packet buffer queue structure (PK_ALLOC() and PK_FREE()).
Points to consider while porting: if one task has obtained the NET_RESID mutex, then any other task needs to wait until the first task releases it. Different mutexes do not affect each other; if one task is holding NET_RESID, another task can still obtain any mutex except NET_RESID. If a task needs both NET_RESID and FREEQ_RESID, then NET_RESID should be locked first, followed by FREEQ_RESID, and they should be released in the reverse order: FREEQ_RESID first, then NET_RESID. The same ordering rule applies to RXQ_RESID and FREEQ_RESID.
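The lock-ordering rule can be sketched with a small LIFO tracker. The mutex IDs mirror the Nichestack names, but the tracking code itself is a hypothetical illustration, not stack code:

```c
#include <assert.h>

/* Illustrative LIFO tracker enforcing release-in-reverse-order */
enum { NET_RESID, RXQ_RESID, FREEQ_RESID };

static int held[8];
static int depth = 0;

static void lock_res( int id )
{
    held[depth++] = id;
}

static void unlock_res( int id )
{
    /* releasing out of the reverse order trips this check */
    assert( depth > 0 && held[depth - 1] == id );
    depth--;
}

/* Correct nesting: NET_RESID first, FREEQ_RESID second, release reversed */
static void use_net_and_freeq( void )
{
    lock_res( NET_RESID );
    lock_res( FREEQ_RESID );
    /* ... access the free packet buffer queue ... */
    unlock_res( FREEQ_RESID );
    unlock_res( NET_RESID );
}
```

Keeping a single global acquisition order like this is the standard way to rule out lock-ordering deadlocks between two tasks.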
The porting engineer can add a new mutex if they want to protect additional shared resources. One use-case scenario: if multiple client instances access the same shared memory variables, that access must be protected with a mutex.
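A sketch of such a user-added mutex, here using a POSIX mutex as a stand-in (the names `app_mutex`, `client_open_session` and `session_count` are hypothetical, not Nichestack APIs):

```c
#include <pthread.h>

/* Every client instance takes app_mutex before touching shared state */
static pthread_mutex_t app_mutex = PTHREAD_MUTEX_INITIALIZER;
static int shared_sessions = 0;

static void client_open_session( void )
{
    pthread_mutex_lock( &app_mutex );
    shared_sessions++;              /* shared variable, touched under the lock */
    pthread_mutex_unlock( &app_mutex );
}

static int session_count( void )
{
    pthread_mutex_lock( &app_mutex );
    int count = shared_sessions;
    pthread_mutex_unlock( &app_mutex );
    return count;
}
```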
Never nest calls on the same mutex. For example:
Task1()
{
    LOCK_NET_RESOURCE(NET_RESID);
    /* ... */
    LOCK_NET_RESOURCE(NET_RESID);   /* second lock on the same mutex */
}
Here the task tries to acquire the same mutex twice; with a non-recursive mutex the second call deadlocks the task against itself.
Use case Scenario of NET Resource method implementation:
Consider there are 2 tasks, FTP
2. Critical Section Method: The APIs used for entering and exiting critical sections are ENTER_CRIT_SECTION() and EXIT_CRIT_SECTION() respectively. When a critical section is entered, all interrupts are disabled; they are enabled again when the critical section is exited.
The Nichestack OS and TCP/IP stack can run on top of an RTOS or directly on the hardware. If there is no RTOS and the ISRs do not access any shared resource, then the enter and exit critical section APIs can be no-ops. If an ISR does access a shared resource, then the critical section method should be used.
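That build-time choice is often expressed as a pair of macros. This is a configuration sketch under an assumed switch name (`NO_RTOS_NO_ISR_SHARING` and the `platform_*` functions are hypothetical, not Nichestack identifiers):

```c
/* Hypothetical build-time switch: with no RTOS and no ISR touching
   shared data, the critical-section macros compile away to no-ops. */
#ifdef NO_RTOS_NO_ISR_SHARING
#define ENTER_CRIT_SECTION()    ( ( void ) 0 )
#define EXIT_CRIT_SECTION()     ( ( void ) 0 )
#else
#define ENTER_CRIT_SECTION()    platform_disable_interrupts()
#define EXIT_CRIT_SECTION()     platform_enable_interrupts()
#endif
```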
In hard real-time projects, care should be taken that the ISRs do not access any shared resources. Then, when entering a critical section, only the other tasks need to be made to wait, not the ISRs.
Entering and exiting critical sections can be nested. For example:
Func1()
{
    ENTER_CRIT_SECTION();
    /* ... */
    EXIT_CRIT_SECTION();
}
Func2()
{
    ENTER_CRIT_SECTION();
    Func1();    /* nested enter/exit */
    /* ... */
    EXIT_CRIT_SECTION();
}
The main difference between a critical section and a mutex is that the former completely disables all interrupts, while the latter only blocks other tasks from accessing the shared resource. The critical section method is mostly used at the low levels, and because it disables interrupts, execution inside it should not take too long.
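Nesting as shown above is commonly implemented with a depth counter: only the outermost enter disables interrupts and only the matching outermost exit re-enables them. A sketch (not the Nichestack source; the interrupt calls are placeholders):

```c
static unsigned int crit_nesting = 0;

static void enter_crit_section( void )
{
    if ( crit_nesting == 0u )
    {
        /* disable_interrupts();  platform-specific, omitted here */
    }
    crit_nesting++;
}

static void exit_crit_section( void )
{
    if ( crit_nesting > 0u )
    {
        crit_nesting--;
    }
    if ( crit_nesting == 0u )
    {
        /* enable_interrupts();  re-enable only at the outermost exit */
    }
}
```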
Please check out this project to get more details about the implementation.
Wednesday, 8 December 2021
PPP Server - Link Termination Phase
Consider that our device is the PPP server and the other peer is the PPP client. There are 5 states in PPP communication: ESTABLISH, AUTHENTICATE, NETWORK, TERMINATE and DEAD. If the client or server moves to the PPP TERMINATE state, that device can no longer accept LCP packets, so a restart is not possible. We will discuss this in more detail later in this post. As the topic is the link termination phase, we will not go into the details of the other states.
In PPP communication, both parties can initiate the link (connection) termination. Link termination is achieved through the exchange of 2 messages: Terminate-Request and Terminate-Acknowledgement.
If the client needs to terminate the connection, a Terminate-REQ message is sent and the client waits for the Terminate-ACK message from the server. Upon receiving the ACK message, the PPP state changes to TERMINATE and subsequently the LCP state to CLOSED. The same applies if the server initiates the link termination. On the server side, after sending the ACK message to the client, the server should wait at least one restart-timer interval before changing its state to DEAD. After that, if the same or a different client tries to re-connect, the server moves from the DEAD state to the ESTABLISH state upon receiving an LCP Config-REQ message from the client.
If the client initiates the link termination, the server moves to the DEAD state instead of the TERMINATE state, so it can later re-establish another connection. In the opposite scenario, the state changes to TERMINATE and the connection cannot be re-established.
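The server-side outcome described above can be captured in a few lines. The enum names follow this post's state list; the function itself is an illustrative sketch, not pppd or Nichestack code:

```c
/* PPP phases as described in this post */
typedef enum
{
    PPP_ESTABLISH,
    PPP_AUTHENTICATE,
    PPP_NETWORK,
    PPP_TERMINATE,
    PPP_DEAD
} ppp_state;

/* Server-side state after link termination: if the client (peer)
   initiated it, the server ends in DEAD and can accept a new LCP
   Config-REQ later; if the server itself initiated, it ends in
   TERMINATE and cannot re-establish the connection. */
static ppp_state server_state_after_termination( int peer_initiated )
{
    return peer_initiated ? PPP_DEAD : PPP_TERMINATE;
}
```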
Monday, 8 March 2021
Big Endian Read and Write
/* This demo program shows how to read */
/* and write big-endian data */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

static inline uint32_t read_32bit_be( const uint8_t * const ptr_buf )
{
    uint32_t byte0 = ptr_buf[0];
    uint32_t byte1 = ptr_buf[1];
    uint32_t byte2 = ptr_buf[2];
    uint32_t byte3 = ptr_buf[3];
    return ( ( byte0 << 24 ) | ( byte1 << 16 ) | ( byte2 << 8 ) | byte3 );
}

static inline void write_32bit_be( uint8_t * ptr_buf, uint32_t value )
{
    ptr_buf[0] = ( uint8_t )( value >> 24 );
    ptr_buf[1] = ( uint8_t )( value >> 16 );
    ptr_buf[2] = ( uint8_t )( value >> 8 );
    ptr_buf[3] = ( uint8_t )value;
}

int main( void )
{
    uint8_t buffer[4];
    uint32_t val_32bit = UINT32_MAX;

    /* Write the 32-bit value to the buffer in big-endian byte order */
    write_32bit_be( buffer, val_32bit );
    val_32bit = 0;

    /* Retrieve the big-endian value from the buffer */
    val_32bit = read_32bit_be( buffer );
    printf( "Big-Endian value - %" PRIX32 "\n", val_32bit );
    return 0;
}
Static and Dynamic Configuration in Embedded C Programming
/* This demo program shows the static and dynamic */
/* configuration with respect to IPv4 and IPv6 */
/* Compiler Used - Visual Studio */
#include <stdio.h>
#include <stdint.h>
/* IP_ENABLE == 0: none */
/* IP_ENABLE == 1: IPv4 */
/* IP_ENABLE == 2: IPv6 */
/* IP_ENABLE == 3: IPv4 and IPv6 */
#define IP_ENABLE 0
#define TRUE 1
#define FALSE 0
typedef struct
{
    int ipv4_enable;
    int ipv6_enable;
} ipconfig;

ipconfig ip_config;

static void set_ip_config( const ipconfig * ptr_config )
{
    ip_config.ipv4_enable = ptr_config->ipv4_enable;
    ip_config.ipv6_enable = ptr_config->ipv6_enable;
}
int main( void )
{
    ipconfig ip; /* for reading the user input */

    /* Static configuration */
#if ( IP_ENABLE == 0 )
    ip_config.ipv4_enable = FALSE;
    ip_config.ipv6_enable = FALSE;
#elif ( IP_ENABLE == 1 )
    ip_config.ipv4_enable = TRUE;
    ip_config.ipv6_enable = FALSE;
#elif ( IP_ENABLE == 2 )
    ip_config.ipv4_enable = FALSE;
    ip_config.ipv6_enable = TRUE;
#elif ( IP_ENABLE == 3 )
    ip_config.ipv4_enable = TRUE;
    ip_config.ipv6_enable = TRUE;
#endif
    printf( "IPv4: %d\t", ip_config.ipv4_enable );
    printf( "IPv6: %d\n", ip_config.ipv6_enable );

    /* Dynamic configuration: read the flags at run time */
    (void)scanf( "%d", &( ip.ipv4_enable ) );
    (void)scanf( "%d", &( ip.ipv6_enable ) );
    set_ip_config( &ip );
    printf( "IPv4: %d\t", ip_config.ipv4_enable );
    printf( "IPv6: %d\n", ip_config.ipv6_enable );

    if ( ip_config.ipv4_enable == TRUE )
    {
        /* Get the IPv4 address from the DHCP server */
    }
    if ( ip_config.ipv6_enable == TRUE )
    {
        /* Get the IPv6 address from the DHCPv6 server */
    }
    return 0;
}
Friday, 5 March 2021
Point to Point Protocol Communication(PPP)
PPP (Point-to-Point Protocol) is a layer-2 (data link) protocol, commonly used for communication between switches, routers etc. PPP can communicate over a serial link (pppd), Ethernet (PPPoE) or USB CDC ACM. The PPP peers pass through different phases to establish the connection. These phases are:
- LCP (Link Control Protocol)
- Authentication (PAP, CHAP...)
- PAP - The user can select either of these authentication protocols for PPP communication, but compared with PAP, CHAP provides more security. In PAP authentication, the peer sends the username and password to the other side. The other side then verifies the username and password against its database. If verification is successful, access is granted. PAP authentication can be done in both directions.
- CHAP - As mentioned earlier, CHAP authentication is more secure than PAP, because in CHAP the password is never sent over the network. During CHAP communication, the peer receives a challenge message from the other side and, using the challenge message and the stored password, computes a hash value. This hash value is sent to the other side, where the same process (password + challenge message) is performed to generate a hash value. That hash is compared with the received hash; if both match, the authentication succeeds. Like PAP, CHAP can also be done in both directions. The Wireshark capture below shows a 2-way CHAP authentication.
- NCP (Network Control Protocol) or IPCP (IP Control Protocol)
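The CHAP flow described above (CHAP is specified in RFC 1994, where the response is MD5 over id || secret || challenge) can be sketched as follows. To stay self-contained, `demo_hash` here is a simple placeholder (FNV-1a), not MD5, and all function names are hypothetical:

```c
#include <stdint.h>
#include <stddef.h>

/* Placeholder hash (FNV-1a); a real CHAP implementation uses MD5 */
static uint32_t demo_hash( const uint8_t * data, size_t len )
{
    uint32_t h = 2166136261u;
    for ( size_t i = 0; i < len; i++ )
    {
        h ^= data[i];
        h *= 16777619u;
    }
    return h;
}

/* Peer side: hash( id || secret || challenge ); assumes inputs fit in buf */
static uint32_t chap_response( uint8_t id,
                               const uint8_t * secret, size_t slen,
                               const uint8_t * challenge, size_t clen )
{
    uint8_t buf[64];
    size_t n = 0;
    buf[n++] = id;
    for ( size_t i = 0; i < slen; i++ ) buf[n++] = secret[i];
    for ( size_t i = 0; i < clen; i++ ) buf[n++] = challenge[i];
    return demo_hash( buf, n );
}

/* Authenticator side: recompute from the stored password and compare */
static int chap_verify( uint8_t id,
                        const uint8_t * secret, size_t slen,
                        const uint8_t * challenge, size_t clen,
                        uint32_t received )
{
    return chap_response( id, secret, slen, challenge, clen ) == received;
}
```

Note how the secret itself never appears in either message; only the challenge and the hash travel over the link.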
sudo pppd -detach lock /dev/ttyAMA0 115200 debug auth dump record client.pcap +chap local noipdefault defaultroute 0.0.0.0:0.0.0.0
sudo pppd -detach lock 192.168.151.101:192.168.151.203 /dev/ttyAMA0 115200 debug auth local dump record server.pcap +chap
You can find more information about these commands from this website - PPPd commands
Monday, 15 February 2021
Ways to Obtain DNS Server Address in IPv6 Client
I assume you have a basic idea of DHCPv6 and ICMPv6.
A DHCPv6 client usually obtains a dynamic IP address through 4 messages: SOLICIT, ADVERTISE, REQUEST and REPLY. During this message transaction, the DHCPv6 client can also request the DNS server address using the ORO (Option Request Option). If the client's request is accepted by the server, the REPLY packet will contain the DNS server address. Likewise, the client can request other options through the ORO.
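On the wire, the ORO is a list of 16-bit option codes in network (big-endian) byte order; the ORO itself is option 6 and the DNS Recursive Name Server option is 23 (RFC 8415 and RFC 3646). A sketch of encoding an ORO that requests only the DNS option (the function name is illustrative):

```c
#include <stdint.h>
#include <stddef.h>

#define D6O_ORO            6   /* Option Request Option (RFC 8415) */
#define D6O_DNS_SERVERS   23   /* DNS Recursive Name Server (RFC 3646) */

/* Append an ORO requesting the DNS option; returns bytes written.
   Caller must provide at least 6 bytes of space. */
static size_t append_oro_dns( uint8_t * buf )
{
    buf[0] = 0; buf[1] = D6O_ORO;          /* option-code, big-endian */
    buf[2] = 0; buf[3] = 2;                /* option-len: one 16-bit code */
    buf[4] = 0; buf[5] = D6O_DNS_SERVERS;  /* requested option code */
    return 6;
}
```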
The IPv6 client can also obtain a dynamic IPv6 address using another mode called SLAAC (Stateless Address Auto-Configuration). The IPv6 client usually performs SLAAC when the Managed flag is not set in the received Router Advertisement (RA) message, whereas if the Managed flag is set, the client uses DHCPv6. Within SLAAC itself, there are 2 ways to obtain the DNS server address. The first is through the RA message itself, via the RDNSS option; the other method is used when the received ICMPv6 RA message has the Other flag set. If the Other flag is set, a DHCPv6 client is used to obtain additional information such as the DNS server address.
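In the RA header, the Managed and Other flags are the two high bits of the flags byte (0x80 and 0x40, per RFC 4861). The decision logic above can be sketched like this (the enum and function names are illustrative):

```c
#include <stdint.h>

#define RA_FLAG_MANAGED 0x80u   /* M bit (RFC 4861) */
#define RA_FLAG_OTHER   0x40u   /* O bit */

typedef enum
{
    ADDR_DHCPV6,                /* full stateful DHCPv6 */
    ADDR_SLAAC_RDNSS,           /* SLAAC; DNS from RDNSS in the RA */
    ADDR_SLAAC_STATELESS_DHCPV6 /* SLAAC; DNS via stateless DHCPv6 */
} addr_mode;

static addr_mode choose_mode( uint8_t ra_flags )
{
    if ( ra_flags & RA_FLAG_MANAGED )
    {
        return ADDR_DHCPV6;
    }
    if ( ra_flags & RA_FLAG_OTHER )
    {
        return ADDR_SLAAC_STATELESS_DHCPV6;
    }
    return ADDR_SLAAC_RDNSS;
}
```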
The final method is, of course, static configuration of the DNS server address.