Tuesday, 11 January 2022

Nichestack OS tutorial 4 - Handling packets using the Device Driver

The NicheStack TCP/IP stack also includes device drivers such as Ethernet, SLIP, PPP, PPPoE and loopback. Each network interface is associated with a network device structure.

This device structure contains a pointer to a prepare function, which is called during NicheStack initialization. There is a related structure, the NET structure, which holds all of the device-specific details. During the prepare call, the required fields of the NET structure are initialized; the NET structure also contains function pointers to the driver-specific routines, and these are assigned during this call.
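As a rough sketch, the wiring done by a prepare routine looks like this. The structure below is a simplified stand-in; the real NET structure in the NicheStack headers has many more fields, and the names used here are illustrative assumptions:

/* Simplified sketch of a NET-style device entry. Field and
   function names are illustrative assumptions, not the exact
   NicheStack definitions. */
struct net_sketch
{
  const char *name;                /* interface name, e.g. "et0" */
  int (*n_init)(int iface);        /* one-time hardware init     */
  int (*pkt_send)(void *pkt);      /* transmit one packet        */
  int (*n_close)(int iface);       /* shut the interface down    */
};

/* Stub driver routines so the sketch compiles */
static int eth_init(int iface)     { (void)iface; return 0; }
static int eth_pkt_send(void *pkt) { (void)pkt;   return 0; }
static int eth_close(int iface)    { (void)iface; return 0; }

/* The prepare call only assigns the driver-specific routines;
   the hardware itself is touched later, in the init step. */
int eth_prepare(struct net_sketch *ifp)
{
  ifp->name     = "et0";
  ifp->n_init   = eth_init;
  ifp->pkt_send = eth_pkt_send;
  ifp->n_close  = eth_close;
  return 0;
}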

The next step is device initialization. During this step, the stack checks whether the interface associated with the driver is working correctly and, if so, changes the device MIB status to UP.

After driver initialization, the driver can send packets using its packet send function. Packets are sent in the same order in which they are handed to the driver, and once a packet is sent the driver frees it. If the driver is busy, the outgoing data is queued and sent later when the driver becomes available.
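A minimal send-side sketch of this behaviour is shown below; the packet type, queue and hardware helpers are placeholder names, not NicheStack APIs:

/* Send-side sketch: transmit in FIFO order, free the packet once
   sent, queue it if the hardware is busy. All names here are
   placeholders. */
typedef struct pkt { struct pkt *next; unsigned len; unsigned char *data; } pkt_t;

static pkt_t *txq_head, *txq_tail;           /* FIFO of pending packets */

static int  hw_tx_busy(void)   { return 0; } /* stub: hardware ready    */
static void hw_tx(pkt_t *p)    { (void)p; }  /* stub: start transmit    */
static void pkt_free(pkt_t *p) { (void)p; }  /* stub: return the buffer */

int drv_pkt_send(pkt_t *p)
{
  if (hw_tx_busy())
  {
    /* driver busy: keep FIFO order by appending to the tail */
    p->next = 0;
    if (txq_tail != 0) txq_tail->next = p;
    else               txq_head = p;
    txq_tail = p;
    return 0;
  }
  hw_tx(p);       /* transmit now...                        */
  pkt_free(p);    /* ...and free the buffer once it is sent */
  return 0;
}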

The above paragraph describes the packet sending process; next we can dig into the packet receiving process. When data is received by the device driver, it must be stored in either a chained or a contiguous packet. The packet is then placed in the stack's receive queue (rcvdq) and a signal is sent to the main task, which is waiting for incoming data. The signal unblocks the main task, which dequeues the packet and passes it to the upper layers based on the packet type.
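The receive path can be sketched as follows. rcvdq corresponds to what is described above, but the packet type, the pk_alloc()/putq() stubs and signal_demux_task() are simplified stand-ins for the port's real primitives:

#include <string.h>

/* Receive-side sketch with stubbed primitives. */
typedef struct pkt { struct pkt *next; unsigned len; unsigned char data[1518]; } pkt_t;
typedef struct { pkt_t *head, *tail; } queue_t;

static queue_t rcvdq;                                          /* receive queue */

static pkt_t *pk_alloc(unsigned len) { (void)len; return 0; }  /* stub */
static void   putq(queue_t *q, pkt_t *p) { (void)q; (void)p; } /* stub */
static void   signal_demux_task(void) { }                      /* stub */

/* Called from the RX interrupt: store the frame in a packet
   buffer, queue it on rcvdq, then wake the main (demux) task. */
void drv_rx_isr(const unsigned char *frame, unsigned len)
{
  pkt_t *p = pk_alloc(len);    /* contiguous buffer in this sketch */
  if (p == 0)
    return;                    /* out of buffers: drop the frame   */

  memcpy(p->data, frame, len);
  p->len = len;

  putq(&rcvdq, p);             /* hand the packet to the stack...  */
  signal_demux_task();         /* ...wake the task that dequeues and
                                  demuxes it by packet type        */
}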

If an application layer task needs to avoid this overhead during sending and receiving, it can use another mechanism called TCP Zero-Copy. This feature avoids the overhead of having the stack copy data between application-owned buffers and stack-owned buffers.
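A zero-copy send, as documented for InterNiche-derived stacks, revolves around writing directly into a stack-owned buffer and handing it over. The sketch below uses the commonly documented names tcp_pktalloc()/tcp_xout() and the netbuf fields nb_prot/nb_plen; treat all of them as assumptions to verify against your stack's headers:

#include <string.h>

/* Zero-copy send sketch; all API names are assumptions. */
typedef struct netbuf { char *nb_prot; int nb_plen; } *PACKET;  /* simplified */

extern PACKET tcp_pktalloc(int datasize);  /* alloc a stack-owned buffer   */
extern int    tcp_xout(long s, PACKET p);  /* send; stack frees on success */
extern void   tcp_pktfree(PACKET p);       /* free on error paths          */

int send_zero_copy(long sock, const char *data, int len)
{
  PACKET p = tcp_pktalloc(len);
  if (p == 0)
    return -1;                    /* no buffer available */

  memcpy(p->nb_prot, data, len);  /* write once, directly into the
                                     buffer the stack will transmit */
  p->nb_plen = len;

  if (tcp_xout(sock, p) < 0)
  {
    tcp_pktfree(p);               /* buffer is still ours on failure */
    return -1;
  }
  return len;                     /* no stack-internal copy occurred */
}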

We can stop the device driver using the close function. This frees the memory resources associated with the driver and changes the network interface status to DOWN.


Sunday, 9 January 2022

Nichestack OS tutorial 3 - Semaphores

In NicheStack, a semaphore acts as a signaling mechanism to notify that an event has occurred. This notification allows a waiting task to resume its execution. NicheStack supports only binary semaphores, and signaling is possible from both ISRs and tasks.

NicheStack contains a main semaphore, which is signaled whenever incoming data is received on one of the drivers. This unblocks the main task waiting on the semaphore, which then dequeues the incoming packet and passes it to the relevant upper layer based on its type. The main semaphore is created during the main task module initialization.

During signaling, the task ID is passed along with the semaphore ID, and from this the stack identifies the consumer task. A semaphore wait can be blocking or non-blocking depending on the timeout value passed along with the semaphore ID. Unlike the mutexes, the semaphores are created during module initialization.
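The signaling pattern can be sketched as follows; wait_net_sem() and post_net_sem() are placeholder names for the port's semaphore wrappers, not actual NicheStack calls:

/* Signaling sketch with placeholder wrapper names. */
#define SEM_MAIN   1          /* ID of the main task's semaphore */
#define NO_TIMEOUT 0          /* block forever                   */

extern int  wait_net_sem(int sem_id, long timeout_ms);  /* assumed */
extern void post_net_sem(int sem_id);                   /* assumed */

/* Main task side: block until a driver signals incoming data */
void main_task_loop(void)
{
  for (;;)
  {
    wait_net_sem(SEM_MAIN, NO_TIMEOUT);  /* blocking wait          */
    /* dequeue from rcvdq and demux by packet type here            */
  }
}

/* Driver/ISR side: binary semaphore, safe to post from an ISR */
void drv_rx_event(void)
{
  post_net_sem(SEM_MAIN);   /* unblocks the main task */
}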

Use case diagram of the InterNiche semaphore signaling





Tuesday, 4 January 2022

Nichestack OS tutorial 1 - Process of Starting a Module

Brief Introduction about NicheStack
NicheStack is primarily a TCP/IP stack with a built-in OS called NicheStack OS. Using NicheStack OS, the TCP/IP stack can run directly on hardware, or it can run on top of a third-party RTOS, in which case the NicheStack OS API must be mapped to the corresponding RTOS API. Several NicheStack ports are already available, including µC/OS-II and FreeRTOS. NicheStack also supports a feature called Superloop, targeting systems without an OS; in Superloop mode there is only one task with a single call stack.
NicheStack was initially developed by InterNiche Technologies; it was later taken over by HCC Embedded and is now owned by Tuxera.

This post excludes the details of Superloop and of running NicheStack OS on a third-party RTOS.

Modules:
This TCP/IP stack consists of different application layer protocols like FTP, HTTP, etc. Each application layer protocol is referred to as a "module".

Devices:
There are different communication drivers present in NicheStack, such as Ethernet, PPP, etc. A driver associated with an interface is called a "device".

The diagram below describes the step-by-step process of starting a module using NicheStack. In the "initialize the modules" section, the socket steps shown are with respect to server code.



Monday, 3 January 2022

Nichestack OS tutorial 2 - NET Resource method and Critical Section

There are two types of mutual exclusion mechanisms present in the NicheStack OS. They are:

     1. NET Resource Method: This allows the programmer to obtain and release a mutex for accessing shared resources. The APIs used to obtain and release these mutexes are LOCK_NET_RESOURCE() and UNLOCK_NET_RESOURCE() respectively.

In order to use such a mutex in a project, we must create it first. Usually, the mutexes are created during the OS initialization phase by calling the mutex create API. All the mutex IDs used in a project are listed in the ipport.h file, and the mutexes are created during OS initialization based on the maximum mutex ID.
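A sketch of that initialization step, with placeholder names for the maximum ID and the create call:

/* Sketch of mutex creation during OS init. MAX_RESID and
   os_mutex_create() are placeholder names; the real IDs
   (NET_RESID, RXQ_RESID, ...) come from ipport.h. */
#define MAX_RESID  3                      /* highest mutex ID in use */

extern int os_mutex_create(int resid);    /* port-specific, assumed  */

void os_init_mutexes(void)
{
  int resid;

  /* create every mutex up to the maximum ID before any task
     calls LOCK_NET_RESOURCE() */
  for (resid = 0; resid < MAX_RESID; resid++)
    os_mutex_create(resid);
}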

The NicheStack OS also provides the TRY_NET_RESOURCE() / UNLOCK_NET_RESOURCE() locking mechanism. The difference between LOCK_NET_RESOURCE and TRY_NET_RESOURCE is that the former waits until the mutex is acquired, whereas the latter returns immediately if the mutex cannot be obtained.

A few mandatory mutexes present in the NicheStack OS are NET_RESID, RXQ_RESID and FREEQ_RESID. NET_RESID must be obtained by a higher-level application while accessing the sockets, TCP, UDP and IP layers of the stack. RXQ_RESID must be obtained while accessing the received data queue structure (getq() and putq()). FREEQ_RESID must be obtained while accessing the free packet buffer queue structure (PK_ALLOC() and PK_FREE()).
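As a small usage sketch (assuming the usual NicheStack headers for LOCK_NET_RESOURCE(), NET_RESID and the t_recv() sockets call are included), a higher-level task brackets its stack access with NET_RESID like this:

/* Usage sketch; assumes the usual NicheStack headers. */
int read_from_socket(long sock, char *buf, int len)
{
  int rc;

  LOCK_NET_RESOURCE(NET_RESID);   /* guard sockets/TCP/UDP/IP access */
  rc = t_recv(sock, buf, len, 0);
  UNLOCK_NET_RESOURCE(NET_RESID);

  return rc;
}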

Points to consider while porting: if one task has obtained the NET_RESID mutex, then other tasks must wait until the first task releases it. The mutexes are independent of each other, meaning that if one task holds NET_RESID, another task can still obtain any other mutex except NET_RESID. If a task needs both NET_RESID and FREEQ_RESID, then NET_RESID should be locked first, followed by FREEQ_RESID; for releasing, FREEQ_RESID goes first, followed by NET_RESID. The same ordering applies to RXQ_RESID and FREEQ_RESID, as shown in the sketch below.
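A sketch of this lock ordering (again assuming the usual NicheStack headers):

/* Lock-ordering sketch: take NET_RESID first, release it last. */
void use_both_mutexes(void)
{
  LOCK_NET_RESOURCE(NET_RESID);      /* 1. outer lock first        */
  LOCK_NET_RESOURCE(FREEQ_RESID);    /* 2. then the inner lock     */

  /* ... access the free packet buffer queue here ... */

  UNLOCK_NET_RESOURCE(FREEQ_RESID);  /* 3. release inner first     */
  UNLOCK_NET_RESOURCE(NET_RESID);    /* 4. release outer lock last */
}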

The porting engineer can add a new mutex to protect additional shared resources. As one use case scenario, consider multiple client instances accessing shared memory variables; that access must be protected with a mutex.

Never nest calls on the same mutex. For example:

Task1()
{
    LOCK_NET_RESOURCE(NET_RESID);
    ...
    LOCK_NET_RESOURCE(NET_RESID);
}

Here we are trying to acquire the same mutex twice in the task function; since the mutex is not recursive, the second call deadlocks the task.

Use case scenario of the NET Resource method implementation:

Consider two tasks, an FTP task and a Telnet task, where the FTP task needs to get received data on a blocking socket. The FTP task locks the NET_RESID mutex and, because the socket is configured as blocking, waits until the data arrives. If the data is delayed for a long time, the tcp_sleep() function is called inside the lower TCP layer; tcp_sleep() releases the NET_RESID lock held by the FTP task, changes its state to suspended, and waits for the receive signal (SIGWAIT).
Now NET_RESID is free and the Telnet task can access the socket layer to get its own received data. If the Telnet data is already available, the task reads it, does its processing and releases the NET_RESID mutex; if not, the Telnet task follows the same steps as in the FTP case and then releases the mutex.

2. Critical Section Method: The difference between the NET Resource method and the critical section method is that the latter is used in the lower layers of the OS, such as getting and putting from queues and other task-related functions (tk_XXX).

The APIs used for entering and exiting critical sections are ENTER_CRIT_SECTION() and EXIT_CRIT_SECTION() respectively. While entering a critical section, all interrupts are disabled, and they are enabled again on exiting.

The NicheStack OS and TCP/IP stack can run on top of an RTOS or directly on the hardware. If there is no RTOS and the ISRs do not access any shared resources, then the enter and exit critical section APIs can be no-ops. If an ISR does access shared resources, then the critical section method must be used, as sketched below.
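A possible port sketch for these macros is shown below. The conditional name and the irq_disable()/irq_restore() intrinsics are placeholder assumptions; the nesting counter also covers the nested enter/exit case shown later in this post:

/* Port sketch for the critical-section macros; all names besides
   ENTER/EXIT_CRIT_SECTION are placeholder assumptions. */
#ifdef NO_ISR_SHARED_DATA

#define ENTER_CRIT_SECTION()   /* no-op */
#define EXIT_CRIT_SECTION()    /* no-op */

#else

extern unsigned irq_disable(void);        /* mask IRQs, return old state */
extern void     irq_restore(unsigned s);  /* restore the saved state     */

static unsigned crit_saved_state;
static int      crit_nest_count;          /* supports nested calls       */

#define ENTER_CRIT_SECTION()                        \
  do {                                              \
    unsigned s = irq_disable();                     \
    if (crit_nest_count++ == 0)                     \
      crit_saved_state = s;   /* outermost enter */ \
  } while (0)

#define EXIT_CRIT_SECTION()                         \
  do {                                              \
    if (--crit_nest_count == 0)                     \
      irq_restore(crit_saved_state);                \
  } while (0)

#endif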

In hard real-time projects, care should be taken that ISRs do not access any shared resources, so that entering a critical section only makes other tasks wait, not the ISRs.

Enter/exit calls can sometimes be nested, so the port must support nesting (for example with a counter, as in the sketch above). For example:

Func1()
{
    ENTER_CRIT_SECTION();
    ...
    EXIT_CRIT_SECTION();
}

Func2()
{
    ENTER_CRIT_SECTION();
    Func1();
    ...
    EXIT_CRIT_SECTION();
}


The main difference between a critical section and a mutex is that the former completely disables all interrupts, while the latter only blocks other tasks from accessing the shared resource. The critical section method is mostly used at the low levels, and because it disables interrupts, execution inside it must not be delayed for long.


Please check out this project to get more details about the implementation.

https://github.com/songwenshuai/NICHESTACK

Wednesday, 8 December 2021

PPP Server - Link Termination Phase

Consider that our device is a PPP server and the peer is a PPP client. There are five states in PPP communication: ESTABLISH, AUTHENTICATE, NETWORK, TERMINATE and DEAD. If the client or server has moved to the PPP TERMINATE state, that device can no longer accept LCP packets, so a restart is not possible; we will discuss this further below. As the topic is the link termination phase, we will not go into the details of the other states.

In PPP communication, either party can initiate the link (connection) termination. Link termination is achieved through the exchange of two messages: Terminate-Request and Terminate-Ack.

If the client needs to terminate the connection, it sends a Terminate-Request message and waits for the Terminate-Ack message from the server. Upon receiving the Ack, the PPP state changes to TERMINATE and the LCP state to CLOSED subsequently. The same applies if the server initiates the link termination. On the server side, after sending the Ack message to the client, the server should wait until at least one restart timer period has passed before changing the state to DEAD. After that, if the same or a different client tries to reconnect, the server moves from the DEAD state to the ESTABLISH state upon receiving an LCP Configure-Request message from the client.

If the client initiates the link termination, the server moves to the DEAD state instead of the TERMINATE state, so it can later re-establish another connection. In the opposite scenario, the state changes to TERMINATE and the connection cannot be re-established. The sketch below illustrates this asymmetry.
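An illustrative sketch of this asymmetry, with state names mirroring this post (the helpers and structure are not from any real PPP implementation):

/* Illustrative PPP termination handling; not from a real stack. */
typedef enum { PPP_ESTABLISH, PPP_AUTHENTICATE, PPP_NETWORK,
               PPP_TERMINATE, PPP_DEAD } ppp_phase_t;

extern void send_terminate_ack(void);   /* assumed helper */
extern void start_restart_timer(void);  /* assumed helper */

/* Server side: the client sent a Terminate-Request */
ppp_phase_t on_terminate_request(void)
{
  send_terminate_ack();      /* acknowledge the client's request       */
  start_restart_timer();     /* wait at least one restart period       */
  return PPP_DEAD;           /* DEAD (once the timer expires): a later
                                LCP Configure-Request re-enters
                                ESTABLISH                              */
}

/* Server side: we initiated termination and received Terminate-Ack */
ppp_phase_t on_terminate_ack(void)
{
  return PPP_TERMINATE;      /* TERMINATE: LCP goes to CLOSED and the
                                link cannot be re-established          */
}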

Monday, 8 March 2021

Big Endian Read and Write


/* This demo program shows how to read */
/* and write big-endian data  */

#include <stdio.h>
#include <stdint.h>

static inline uint32_t read_32bit_be( const uint8_t * const ptr_buf )
{
  uint32_t  byte0;
  uint32_t  byte1;
  uint32_t  byte2;
  uint32_t  byte3;

  byte0 = ptr_buf[0];
  byte1 = ptr_buf[1];
  byte2 = ptr_buf[2];
  byte3 = ptr_buf[3];

  byte0 <<= 24;
  byte1 <<= 16;
  byte2 <<= 8;

  return  ( byte0 | byte1 | byte2 | byte3 );
}

static inline void  write_32bit_be( uint8_t * ptr_buf, uint32_t value )
{
  ptr_buf[0] = value >> 24;
  ptr_buf[1] = value >> 16;
  ptr_buf[2] = value >> 8;
  ptr_buf[3] = value;
}

int main()
{
  uint8_t buffer[4];
  uint32_t val_32bit;

  val_32bit = UINT32_MAX;

  /*write the 32 bit value to the buffer*/
  write_32bit_be( buffer, val_32bit );

  val_32bit = 0;
  /*Retrieve the big endian value from the buffer*/
  val_32bit = read_32bit_be( buffer );

  printf( "Big-Endian value - %X\n", val_32bit );

  return 0;
}

Static and Dynamic Configuration in Embedded C Programming


/* This demo program shows the static and dynamic */
/* configuration with respect to IPv4 and IPv6 */
/* Compiler Used - Visual Studio */
#include <stdio.h>
#include <stdint.h>

/* IP_ENABLE == 0: none */
/* IP_ENABLE == 1: IPv4 */
/* IP_ENABLE == 2: IPv6 */
/* IP_ENABLE == 3: IPv4 and IPv6 */
#define IP_ENABLE     0

#define TRUE          1
#define FALSE         0

typedef struct
{
  int ipv4_enable;
  int ipv6_enable;
}ipconfig;

ipconfig ip_config;

static void set_ip_config( ipconfig * ptr_config )
{
  ip_config.ipv4_enable = ptr_config->ipv4_enable;
  ip_config.ipv6_enable = ptr_config->ipv6_enable;
}

int main()
{
  ipconfig ip; /*for getting the input*/
  /*Static configuration*/
#if ( IP_ENABLE == 0 )
  ip_config.ipv4_enable = FALSE;
  ip_config.ipv6_enable = FALSE;
#elif ( IP_ENABLE == 1 )
  ip_config.ipv4_enable = TRUE;
  ip_config.ipv6_enable = FALSE;
#elif ( IP_ENABLE == 2 )
  ip_config.ipv4_enable = FALSE;
  ip_config.ipv6_enable = TRUE;
#elif ( IP_ENABLE == 3 )
  ip_config.ipv4_enable = TRUE;
  ip_config.ipv6_enable = TRUE;
#endif

  printf( "IPv4: %d\t", ip_config.ipv4_enable );
  printf( "IPv6: %d\n", ip_config.ipv6_enable );

  /*Dynamic Configuration: read 0 or 1 for each flag*/
  (void)scanf( "%d", &(ip.ipv4_enable) );
  (void)scanf( "%d", &(ip.ipv6_enable) );

  set_ip_config( &ip );

  printf( "IPv4: %d\t", ip_config.ipv4_enable );
  printf( "IPv6: %d\n", ip_config.ipv6_enable );

  if ( ip_config.ipv4_enable == TRUE )
  {
    /*Get the IPv4 address from the DHCP server*/
  }

  if ( ip_config.ipv6_enable == TRUE )
  {
    /*Get the IPv6 address from the DHCPv6 server*/
  }
  
  return 0;
}