This feature made diskless boot support painless. The accommodation of three different OS conventions (Domain, BSD, and SysV) was handled in filename space, as described, using environment variables which the filesystem expanded when resolving the actual names accessed. Or, you could easily add a feature to the packaging system to install the proper binary for the correct architecture and not waste disk space on binaries for other, unused architectures.
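The variant-link idea above can be sketched roughly: a resolver expands an environment variable embedded in a path before the file is opened, so one filename space serves several architectures. This is only an illustration of the concept; the `$(ARCH)` marker and the `resolve_variant` helper are hypothetical, and Domain/OS did this inside the filesystem itself, not in user space:

```python
import os

def resolve_variant(path, env=os.environ):
    """Expand $(VAR) markers in a path, mimicking variant links.

    A target like /bin/$(ARCH)/ls resolves differently on each
    machine, so the same symlink works for every architecture.
    """
    out = path
    while "$(" in out:
        start = out.index("$(")
        end = out.index(")", start)
        var = out[start + 2:end]
        out = out[:start] + env.get(var, "") + out[end + 1:]
    return out

# With ARCH set per machine, one path serves all architectures.
print(resolve_variant("/bin/$(ARCH)/ls", {"ARCH": "x86_64"}))  # /bin/x86_64/ls
```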
Your script would fail in certain scenarios, for example, running an x86 binary on an amd64 system. My initial reaction was the same: we already have multi-file packages, so isn't it more natural just to have a binary for each architecture?
So placing the burden of choosing one on user space is wrong. And files are user-space things; the kernel should not navigate directories.
Multi-arch binaries are not tremendously useful. Multi-arch libraries are very useful. Yes, directories once again could be used, but various "standards" groups have already agreed on a de facto lib vs. lib64 multi-arch setup which totally falls apart in the face of anything besides a single pair of 32-bit and 64-bit architectures. I'd much rather have just seen the platform encoded in the library sonames and filenames.
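As a rough sketch of what platform-encoded library names might look like (the naming scheme below is hypothetical, not an existing standard), any number of architectures could then coexist in one directory instead of the two that lib vs. lib64 allows:

```python
def platform_soname(base, arch):
    """Insert an architecture tag into a shared-library name.

    e.g. libfoo.so.1 -> libfoo-x86_64.so.1, a made-up convention
    that scales beyond a single 32/64-bit pair of directories.
    """
    stem, sep, rest = base.partition(".so")
    return stem + "-" + arch + sep + rest

print(platform_soname("libfoo.so.1", "x86_64"))   # libfoo-x86_64.so.1
print(platform_soname("libfoo.so.1", "armv7l"))   # libfoo-armv7l.so.1
```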
Oddly enough, though, multi-arch executables are actually a better solution than directories, because the question comes down to which directory to search for executables. Granted, I don't find multi-arch binaries particularly useful, so I have no problem with packages or installers just figuring out which binary to install. However, people who use NFS-mounted root directories across a variety of systems could get a big boost out of something like FatELF. Less maintenance and all that jazz. All at the cost of a little extra disk space on a server and slightly bigger packages to download on the 50 Mbps pipes you can get for cheap these days.
The installer can just install the proper binaries based on the target architecture. An installer shell script can pick which installer binary to run, or, better yet, the Linux software distribution scene could get its head out of its ass and supply a standard distro-neutral installer framework that's installed in the base package set for every distro, like it should've been done 15 years ago. What's the problem again? However, for multi-arch systems this might be useful, and I suppose that in the days when I had a mix of architectures it would have been nice to be able to install FatELF software to a shared network drive and have it just work.
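A minimal sketch of the selection step such an installer would perform, assuming the package ships hypothetical per-architecture binaries named `installer-<arch>` alongside a small launcher (the alias table is illustrative, not exhaustive):

```python
import platform

# Map `uname -m` style machine names onto the shipped binary names.
ALIASES = {
    "x86_64": "amd64",
    "amd64": "amd64",
    "i386": "x86",
    "i586": "x86",
    "i686": "x86",
    "aarch64": "arm64",
}

def pick_installer(machine=None):
    """Return the name of the per-arch installer binary to run."""
    machine = machine or platform.machine()
    arch = ALIASES.get(machine)
    if arch is None:
        raise SystemExit("unsupported architecture: " + machine)
    return "installer-" + arch

print(pick_installer("x86_64"))   # installer-amd64
```

A real launcher would then `exec` the chosen file; the point is that the dispatch logic is a handful of lines in user space, with no kernel involvement.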
Plus, for installers this sort of thing could be handy. But since most software that I install comes from a repository, a fat package would work just as well as fat binaries. One of the major pains of cross-compiling is getting all the library paths sorted out: which ones your tools use versus which ones the things you build with those tools will use, and so on.
Relatedly, if compilers and linkers are FatELF-aware, they can build FatELF binaries a bit faster, because they only have to do the parsing once and can then emit object code for all the platforms from the same resulting parse tree. No, the biggest problem is that this "solution" will require all of the development tools to be redesigned for dubious benefit. Trivial, and only objcopy needs extension, as it has been, IIRC. Then again, I also believe in link-time optimization for gcc and the tooth fairy.
FatELF: universal binaries for Linux. October 28. This article was contributed by Koen Vervloesem. Distributions no longer need to have separate downloads for various platforms.
You can remove all the confusing text from your website about "which installer is right for me?" I have a long list of things that Linux should blatantly steal from Mac OS X, and given infinite time, I'll implement them all. FatELF happens to be something on that list that is directly useful to my work as a game developer and that also happens to be a simple project. I think the changes required to the system are pretty small for what could be good benefits to Unix as a whole.
I was so intimidated by the kernel mailing list that I spent a disproportionate amount of time researching etiquette, culture, and procedure. I didn't want to offend anyone or waste their time.
The idea seems interesting, but does it need to be ELF-specific? What about making the executable a simple archive file format, possibly just an "ar" archive? The archive file format would be implemented as its own binfmt, and the internal executables could be arbitrary other executables. The outer loader would just try executing each executable until one works or it runs out. Any open-source driver should be encouraged to be merged with mainline Linux so there's no need to distribute them separately.
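A user-space sketch of that archive approach, assuming the standard ar on-disk layout (an 8-byte magic string followed by 60-byte member headers): pack one executable per architecture, then have a small loader pick the member for the running machine. Naming members after `platform.machine()` is an illustrative choice; a real binfmt handler would live in the kernel and could simply try each member until one executes.

```python
import io
import platform

AR_MAGIC = b"!<arch>\n"

def ar_pack(members):
    """Build a minimal ar archive from a {name: bytes} dict."""
    buf = io.BytesIO()
    buf.write(AR_MAGIC)
    for name, data in members.items():
        # Fields: 16s name, 12s mtime, 6s uid, 6s gid, 8s mode, 10s size.
        hdr = "{:<16}{:<12}{:<6}{:<6}{:<8}{:<10}".format(
            name + "/", 0, 0, 0, "644", len(data))
        buf.write(hdr.encode("ascii") + b"`\n")
        buf.write(data)
        if len(data) % 2:           # members are 2-byte aligned
            buf.write(b"\n")
    return buf.getvalue()

def ar_unpack(blob):
    """Parse an ar archive back into a {name: bytes} dict."""
    assert blob[:8] == AR_MAGIC
    members, off = {}, 8
    while off + 60 <= len(blob):
        hdr = blob[off:off + 60]
        name = hdr[:16].decode("ascii").strip().rstrip("/")
        size = int(hdr[48:58])
        members[name] = blob[off + 60:off + 60 + size]
        off += 60 + size + (size % 2)   # skip alignment padding
    return members

def pick_executable(members, machine=None):
    """Choose the member whose name matches the running machine."""
    machine = machine or platform.machine()
    if machine not in members:
        raise OSError("no executable for " + machine)  # akin to ENOEXEC
    return members[machine]

fat = ar_pack({"x86_64": b"\x7fELF...amd64", "aarch64": b"\x7fELF...arm64"})
print(pick_executable(ar_unpack(fat), "aarch64"))
```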
I've only submitted the kernel patches. If the kernel community is ultimately uninterested, there's not much point in bothering the binutils people. The patches for all the other parts are sitting in my Mercurial repository.
Back in the 32-bit days, when DOS was still behind the covers of Windows, you could use djgpp to create binaries that would run on Linux as well as Windows. I know, because I built some. It was BitRock Installer. Somehow you can specify an internal format inside of an EXE. I have some Linux-native ones. Or someone was being cute and gave it a different extension and lied to me.
I guess it was not with PE binaries and possibly not even 32-bit. A utility I know of that works like that is bootlace.