Wednesday, April 10, 2013


A Good Way to Allow C/C++ Code to Adapt to Multiple Environments


One of the things I usually take care of as a Gentoo package maintainer is sending patches to upstream developers. If a patch is applied upstream, we can remove it from future versions of a package, so we have less work to do to maintain the package. Unfortunately, it seems that other distributions and packagers don't always do the same. This is true not only for Linux distributions such as Debian, Fedora Core, and SUSE, but also for maintainers of packages in places like FreeBSD's Ports, DarwinPorts, or Fink. Here are some tips for developers on making things easier for themselves and for everyone who has to touch their code.
When upstream developers are unaware of the problems their software has on platforms they can't test (perhaps because they use another distribution, another environment, another operating system, or another hardware platform), they can't fix them and are likely to introduce more problems in future versions if they assume that things are fine as they are. Letting upstream developers know about the problems, filing bugs, and in general reporting issues is one of the best ways to help an open source project, and it's something even users with no technical skills can do.
When you have the technical ability to fix a bug, though, you should try to provide a patch to the upstream developers. However, not all patches can be applied unconditionally upstream, and that makes it harder to fix a problem in the short term. Some of the errors people make while preparing a patch and sending it upstream can be avoided in a reasonably simple way, but I still see bad patches in many packaging systems (including Gentoo on occasion).
The first thing to take into account when writing a patch is that the environment you're working in can differ from other environments. By "environment" I mean all the factors that can affect the behaviour of a program: the operating system and its version; the distribution, if an operating system has more than one; the drivers used, when there is more than one kind; the versions of the libraries or of the tools; and so on. One of the most common assumptions developers make while creating patches is that the environment they're using is the "right" one and everything else should follow it; however, even when the environment used really is the "right" one, a patch should always be general enough to apply to "broken" environments as well.
Using GNU's Autotools is a good way to allow a C/C++ program to adapt itself to multiple environments. Although they have many problems, and are hated by most of their users, Autotools are currently the best way to handle a multi-platform, adaptable building system. They have facilities to check for headers, libraries, functions, and a lot more. Unfortunately Autotools are quite difficult to learn, and I'm not going into details on how to use them or how to write macros in Autotools' M4 language, but here's a brief overview.
In an Autotools-based project, you usually write a configure.ac script in M4, which defines the checks needed on the host machine (the machine where the build is going to be done); this script is then translated into a shell script by autoconf. You also write Makefile.am files, which automake uses to create the templates for the makefiles that the configure script generates after running its checks. Usually the configure script can define conditionals for the makefiles and create a config.h file containing C preprocessor macros you can use to write conditional code in a C or C++ source file (using #ifdefs).
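As a minimal sketch of how the pieces fit together (the project name and file layout here are purely illustrative), a configure.ac and Makefile.am pair could look something like this:

    # configure.ac -- autoconf turns this into the configure script
    AC_INIT([myprog], [0.1])
    AM_INIT_AUTOMAKE([foreign])
    AC_PROG_CC
    AC_CONFIG_HEADERS([config.h])    # results of the checks end up here as #defines
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT

    # Makefile.am -- automake turns this into Makefile.in, configure into Makefile
    bin_PROGRAMS = myprog
    myprog_SOURCES = main.c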
For instance, you can't always assume that a given header is there, even if it's part of a standard defined for Unix systems. Libraries change, including system libraries, and something you're writing now can change in the future. Try to make sure that all the system and non-conventional headers you use are present before using them. You can usually do that with an AC_CHECK_HEADERS() call in configure.ac and then use the config.h generated on the host machine to check whether they are present. Sometimes you get warnings during configure execution stating that a given header "can be preprocessed but fails to compile": while this is only a warning for now, it is going to be an error in the future, so try to fix it as soon as you can. The fix usually involves adding a couple of needed headers to the macro's prerequisite (includes) argument. With this check, for example, you can usually avoid the infamous malloc.h error on FreeBSD systems (although malloc.h is actually deprecated; you can use stdlib.h in its place without problems).
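A small sketch of how this looks from the C side, using the malloc.h example from above:

    /* configure.ac would contain:  AC_CHECK_HEADERS([malloc.h])
     * config.h then defines HAVE_MALLOC_H only on systems where the header exists. */
    #include "config.h"
    #include <stdlib.h>          /* malloc() and free() are always declared here */
    #ifdef HAVE_MALLOC_H
    # include <malloc.h>         /* only pulled in where configure found it */
    #endif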
System headers aren't portable, so you should avoid them unless you need a kernel service you can't get in any other way. Try instead to get the information or the services you need through more portable interfaces; that is usually also more robust across different versions of the same operating system. It's not unlikely that different Linux (kernel) versions have incompatible headers that can make some software fail to work.
Libraries, too, can be a problem. Glibc, used by every Linux distribution (apart from the ones for embedded use, which normally rely on uclibc or dietlibc), provides not only the base functions a normal C library should provide (for example, to interact with the kernel) but also a complete iconv() implementation, a basic gettext implementation, and a getopt_long() function used to parse long options when a program uses getopt to handle the arguments given to it by the user. Other system libraries, like FreeBSD's, don't provide all of that: iconv() is provided on FreeBSD systems by GNU libiconv, gettext is entirely provided by GNU gettext, and for getopt_long() you need an extra library on the 4.x series of FreeBSD, on DragonFly BSD, and probably on other non-Linux systems. Autotools provide the AC_CHECK_LIB() macro to check for the presence of a given function in a library, allowing packages to check whether they must link to libiconv, libdl, libintl, and so on.
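A hedged configure.ac sketch for the getopt_long() case mentioned above (the extra library name is an assumption, used purely for illustration):

    # Look for getopt_long() in libc first, then in an extra library if needed;
    # whichever library provides it gets added to LIBS automatically.
    AC_SEARCH_LIBS([getopt_long], [gnugetopt],
                   [], [AC_MSG_ERROR([getopt_long() not found])])

    # AC_CHECK_LIB() instead checks one known library for one function:
    AC_CHECK_LIB([dl], [dlopen])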
The size of basic types, such as integers and pointers, can vary among hardware platforms. This problem has been getting bigger lately, as x86_64 systems can now be considered the first 64-bit hardware platform for mainstream desktop users. Having to care about 64-bit cleanness can be a bit tricky for software developed assuming the usual x86 platform, but the fixes are usually easy enough to make in a couple of minutes. One of the most important ways to ensure that variables have the right size is to use the "standardized integers" found in the sys/types.h or stdint.h headers. These are renamed integer types, with names of the form (u)intSIZE_t -- so you have int8_t for 8-bit signed integers and uint64_t for 64-bit unsigned integers. Pointers can't use those fixed-length types, as their size depends on the architecture; they usually have the same length as long integers, but if you can, try intptr_t (or ptrdiff_t for pointer differences) before falling back to long to store them. Other types, such as size_t and off_t, are defined with the appropriate size on every platform, so they are safe to use as they are. You should never mix fixed-length and "named" types like unsigned int and long, and you should make sure you're using the right %-code when passing integers to printf or scanf. The inttypes.h header defines macros that give you the right conversion code for each size, so PRId64 is the equivalent of %d for 64-bit integers, while PRIu32 is the equivalent of %u for 32-bit integers.
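A short C sketch of both points, fixed-width types and the matching printf() macros:

    #include <stdint.h>
    #include <inttypes.h>      /* PRId64, PRIu32 and friends */
    #include <stdio.h>

    int main(void)
    {
        int64_t   big   = -1234567890123LL;       /* 64 bits on every platform */
        uint32_t  small = 42;                     /* 32 bits on every platform */
        int       array[4];
        ptrdiff_t diff  = &array[3] - &array[0];  /* pointer-sized difference */

        /* the PRI* macros expand to the right conversion code for each size */
        printf("big=%" PRId64 " small=%" PRIu32 " diff=%td\n", big, small, diff);
        return 0;
    }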
Assembler code has also become a problem lately. Before the introduction of x86_64 processors, every hardware platform had its own assembler code, so porting it wasn't much of an issue: it was just a matter of having a non-assembler version of the code, maybe slower but portable. With the new architecture, instead, you can have assembler code shared between x86 and x86_64 systems, and this is especially true for multimedia applications that make use of extended instructions like MMX, SSE, or 3DNow!. Although the assembler syntax is mostly the same, there are a few things to think about, such as the size of the registers and the operations run on them. Instructions that assume 32-bit registers and pointers will not work as expected on an x86_64 processor, so you usually have to select the registers to use as operands with conditionals.
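A minimal sketch of the idea, using GCC-style inline assembler and a plain C fallback; the function itself is hypothetical and only meant to show the conditional structure:

    #include <stdint.h>
    #include <stdio.h>

    /* Add two pointer-sized integers, picking instructions that match
     * the register width of the architecture being compiled for. */
    static uintptr_t add_ptr_sized(uintptr_t a, uintptr_t b)
    {
    #if defined(__x86_64__)
        __asm__ ("addq %1, %0" : "+r"(a) : "r"(b));   /* 64-bit registers */
    #elif defined(__i386__)
        __asm__ ("addl %1, %0" : "+r"(a) : "r"(b));   /* 32-bit registers */
    #else
        a += b;                                       /* portable C fallback */
    #endif
        return a;
    }

    int main(void)
    {
        printf("%lu\n", (unsigned long)add_ptr_sized(40, 2));
        return 0;
    }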
The discussion about hardware compatibility is of course longer than this and deserves a book of its own, as there are a lot of other tricky conditions to take care of. Those working on non-x86 architectures already know how to fix any problems that come up and are likely to provide patches in cases where the software is broken.
Returning to software compatibility issues: you should take into account the possibility of cross-compiling the software. Getting Autotools-based projects to build in a cross-compile setup is usually easy, provided you have all the dependencies already cross-compiled (and, obviously, a working cross-compiler for the target architecture). Unfortunately, some configure scripts are broken by design and fail to work as they should when cross-compiling. The most common error is to use the output of the uname command to select the architecture or platform to compile for; this tells you about the current system, not the platform the build is aimed at. Instead, rely on the variables configure already provides: ${build} describes the machine you're compiling on, ${host} the machine the program will run on, and ${target} (relevant mostly when building compilers) the machine the generated code is aimed at. These CHOST-like strings are structured as tuples of either three components (arch-vendor-os) or four (arch-vendor-kernel-libc), although the latter is used only when the operating system has no single "native" libc. The arch part of the string defines the hardware platform used: i386, i686, x86_64, ppc, ppc64, sparc, and so on. The vendor part used to refer to the hardware vendor (for example, ibm for their mainframes) but lately is used to refer to the software vendor, so you can find redhat, debian, mandrake, gentoo (note that Gentoo uses it only on uclibc and Gentoo/*BSD systems) and others there, but usually it's just "pc" for x86 and "unknown" for other architectures. The os part is the trickiest one, as it defines the operating system used. Usually it's "linux-gnu" for GNU/Linux systems, but it can be "linux-uclibc" or just "linux" for other Linux-based systems, or "freebsdX.Y" to refer to a given version of FreeBSD (just "freebsd" is not a valid value). Generally, this value is what you use to check which OS-specific code to enable.
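A sketch of how a configure.ac can branch on that value (the AC_DEFINE names here are illustrative, not an established convention):

    AC_CANONICAL_HOST
    case "${host_os}" in
      linux*)   AC_DEFINE([PLATFORM_LINUX],   [1], [Building for Linux])   ;;
      freebsd*) AC_DEFINE([PLATFORM_FREEBSD], [1], [Building for FreeBSD]) ;;
      darwin*)  AC_DEFINE([PLATFORM_DARWIN],  [1], [Building for Darwin])  ;;
      *)        AC_MSG_WARN([untested operating system: ${host_os}])       ;;
    esac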
I hope that this introductory article will inspire patch authors to make sure the changes they make don't break other systems. By using the right combination of Autotools checks and conditional compilation to apply a change only in the environments that need it, developers can help upstream maintainers fix their software, saving maintainers from having to reinvent the wheel every time something in the "common" environment changes.

Tuesday, April 9, 2013

LTE : Long Term Evolution 

* Latest Technology Entry *, shall we call it?

LTE is a wireless broadband technology. It supports high-speed internet usage by cell phones and handheld devices, and it is also commonly called 4G technology. LTE is part of the GSM evolutionary path for mobile broadband, following EDGE, UMTS, HSPA (HSDPA and HSUPA combined) and HSPA Evolution (HSPA+). Although HSPA and its evolution are strongly positioned to be the dominant mobile data technology for the next decade, the 3GPP family of standards must evolve toward the future. HSPA+ will provide the stepping-stone to LTE for many operators.

The overall objective for LTE is to provide an extremely high performance radio-access technology that offers full vehicular speed mobility and that can readily coexist with HSPA and earlier networks. Because of scalable bandwidth, operators will be able to easily migrate their networks and users from HSPA to LTE over time.

LTE assumes a full Internet Protocol (IP) network architecture and is designed to support voice in the packet domain. It incorporates top-of-the-line radio techniques to achieve performance levels beyond what will be practical with CDMA approaches, particularly in larger channel bandwidths. However, in the same way that 3G coexists with second generation (2G) systems in integrated networks, LTE systems will coexist with 3G and 2G systems. Multimode devices will function across LTE/3G or even LTE/3G/2G, depending on market circumstances.

Standards development for LTE continued with 3GPP Release 9 (Rel-9), which was functionally frozen in December 2009.  3GPP Rel-9 focuses on enhancements to HSPA+ and LTE while Rel-10 focuses on the next generation of LTE for the International Telecommunication Union’s (ITU) IMT-Advanced requirements and both were developed nearly simultaneously by 3GPP standards working groups. Several milestones have been achieved by vendors in recent years for both Rel-9 and Rel-10. Most significant was the final ratification by the ITU of LTE-Advanced (Rel-10) as IMT-Advanced in November 2010.
 
The first commercial LTE networks were launched by TeliaSonera in Norway and Sweden in December 2009; as of November 2012, there were 117 commercial LTE networks in various stages of commercial service. Many trials are underway with up to 130 LTE deployments expected in 2012.
 
For many years now, a true world cellular standard has been one of the industry’s goals. GSM dominated 2G technologies but there was still fragmentation with CDMA and TDMA as well as iDEN. With the move to 3G, nearly all TDMA operators migrated to the 3GPP technology path. Yet the historical divide remained between GSM and CDMA.  It is with the next step of technology evolution that the opportunity has arisen for a global standard technology. Many operators have converged on the technology they believe will offer them and their customers the most benefits. That technology is Long Term Evolution.  Most leading operators, device and infrastructure manufacturers, as well as content providers support LTE as the mobile technology of the future. Operators, including leading GSM-HSPA and CDMA EV-DO operators as well as newly licensed and WiMAX operators, are making strategic, long-term commitments to LTE networks. All roads lead to LTE.

In June of 2008, the Next Generation Mobile Networks Alliance (NGMN) selected LTE as the first technology that matched its requirements successfully. 4G Americas, GSMA, UMTS Forum, and other global organizations have reiterated their support of the 3GPP evolution to LTE.  Additionally, the LSTI Trial Initiative has provided support through early co-development and testing of the entire ecosystem from chipset, device and infrastructure vendors.

LTE products have been tested, trialed and commercially announced in the market by manufacturers that are already part of a well-planned LTE eco-system. The LTE ecosystem will build upon the economies of scope and scale of the entire 3GPP family of technologies.
 
LTE uses Orthogonal Frequency Division Multiple Access (OFDMA) on the downlink, which is well suited to achieve high peak data rates in high spectrum bandwidth. WCDMA radio technology is, essentially, as efficient as Orthogonal Frequency Division Multiplexing (OFDM) for delivering peak data rates of about 10 Mbps in 5 MHz of bandwidth. Achieving peak rates in the 100 Mbps range with wider radio channels, however, would result in highly complex terminals and is not practical with current technology. This is where OFDM provides a practical implementation advantage.

The OFDMA approach is also highly flexible in channelization, and LTE will operate in various radio channel sizes ranging from 1.4 to 20 MHz. LTE also boosts spectral efficiency.

On the uplink, however, a pure OFDMA approach results in high Peak to Average Ratio (PAR) of the signal, which compromises power efficiency and, ultimately, battery life. Hence, LTE uses an approach for the uplink called Single Carrier FDMA (SC-FDMA), which is somewhat similar to OFDMA, but has a 2 to 6 dB PAR advantage over the OFDMA method used by other technologies such as WiMAX IEEE 802.16e.
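To make the decibel figures concrete: the peak-to-average ratio is simply the peak instantaneous power divided by the mean power, expressed in dB. A small sketch, using made-up amplitude samples rather than a real LTE waveform:

    #include <math.h>
    #include <stdio.h>

    /* Peak-to-Average Ratio, in dB, of a block of amplitude samples. */
    static double par_db(const double *x, int n)
    {
        double peak = 0.0, sum = 0.0;
        for (int i = 0; i < n; i++) {
            double p = x[i] * x[i];        /* instantaneous power */
            if (p > peak) peak = p;
            sum += p;
        }
        return 10.0 * log10(peak / (sum / n));
    }

    int main(void)
    {
        double x[] = { 0.2, -0.9, 0.4, 1.8, -0.3, 0.6, -1.1, 0.5 };  /* illustrative */
        printf("PAR = %.1f dB\n", par_db(x, 8));
        return 0;
    }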

LTE capabilities include:
  • Downlink peak data rates up to 326 Mbps with 20 MHz bandwidth
  • Uplink peak data rates up to 86.4 Mbps with 20 MHz bandwidth
  • Operation in both TDD and FDD modes
  • Scalable bandwidth up to 20 MHz, covering 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz, and 20 MHz in the study phase
  • Increased spectral efficiency over Release 6 HSPA by two to four times
  • Reduced latency: round-trip times between user equipment and the base station as low as 10 milliseconds (ms), and transition times from inactive to active of less than 100 ms