Open jobs for finishing GNU libc:
---------------------------------
Status: September 2002

If you have the time and talent to take over any of the jobs below,
please contact <bug-glibc@gnu.org>.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[ 1] Port to new platforms or test the current version on formerly
     supported platforms.

**** See http://www.gnu.org/software/libc/porting.html for more details.


[ 2] Test compliance with standards.  If you have access to recent
     standards (IEEE, ISO, ANSI, X/Open, ...) and/or test suites, you
     could do some checks; the goal is to be compliant with all
     standards as long as they do not contradict each other.


[ 3] The most important task (in my opinion) is to write a more complete
     test suite.  There cannot be too many people working on this.  It is
     not difficult to write a test: find a definition of the function
     (which I can normally provide, if necessary) and start writing tests
     for compliance.  Besides this, take a look at the sources and write
     tests which together exercise as many paths of execution as
     possible.
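
     A test does not have to be elaborate to be useful.  Below is a
     minimal sketch (not taken from the actual test suite) of the kind
     of standalone check meant here; it verifies a few ISO C
     requirements on strchr and exercises more than one path through
     the function:

	/* Minimal standalone conformance test sketch for strchr.  */
	#include <stdio.h>
	#include <string.h>

	static int failures;

	static void
	check (int ok, const char *what)
	{
	  if (!ok)
	    {
	      printf ("FAIL: %s\n", what);
	      ++failures;
	    }
	}

	int
	main (void)
	{
	  const char s[] = "abcabc";

	  /* Match in the middle of the string.  */
	  check (strchr (s, 'b') == s + 1, "strchr finds first occurrence");
	  /* No match at all.  */
	  check (strchr (s, 'x') == NULL, "strchr returns NULL on no match");
	  /* ISO C: the terminating NUL is part of the string.  */
	  check (strchr (s, '\0') == s + 6, "strchr finds the terminating NUL");

	  return failures != 0;
	}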


[ 4] Write translations of the GNU libc messages for the languages which
     are not yet supported.  GNU libc is fully internationalized and
     users can immediately benefit from this.

     Take a look at the matrix in
	ftp://ftp.gnu.org/pub/gnu/ABOUT-NLS
     for the current status (but better use a mirror of ftp.gnu.org).


[ 8] If you enjoy assembler programming (as I do --drepper :-) you might
     be interested in writing optimized versions for some functions.
     Especially the string handling functions can be optimized a lot.

     Take a look at

	Faster String Functions
	Henry Spencer, University of Toronto
	Usenix Winter '92, pp. 419--428

     or just ask.  Currently optimized versions exist mostly for i?86
     and Alpha.  Please ask before working on this to avoid duplicate
     work.
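
     As an illustration of the kind of trick involved (shown in C only;
     the real work would be in assembler, and a production version also
     has to worry about aliasing and endianness), the usual starting
     point is to scan the string word-by-word once the pointer is
     aligned.  This is just a sketch of the technique, not a drop-in
     implementation:

	#include <stddef.h>
	#include <stdint.h>

	size_t
	my_strlen (const char *s)
	{
	  const char *p = s;

	  /* Scan byte-wise until the pointer is word-aligned; an aligned
	     word read can then never cross into an unmapped page.  */
	  while ((uintptr_t) p % sizeof (uint32_t) != 0)
	    {
	      if (*p == '\0')
		return p - s;
	      ++p;
	    }

	  /* (W - 0x01010101) & ~W & 0x80808080 is nonzero iff some byte
	     of W is zero (the classic "has a zero byte" test).  */
	  const uint32_t *wp = (const uint32_t *) p;
	  while (((*wp - 0x01010101U) & ~*wp & 0x80808080U) == 0)
	    ++wp;

	  /* The zero byte is in this word; locate it byte-wise.  */
	  p = (const char *) wp;
	  while (*p != '\0')
	    ++p;
	  return p - s;
	}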


[11] Write access functions for the netmasks, bootparams, and automount
     databases for the nss_files and nss_db modules.
     The functions should be embedded in the NSS scheme.  This is not
     hard, and not all services must be supported at once.
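
     Purely as a hypothetical sketch of the shape such a function would
     take: the function name and the `struct netmaskent' type below do
     not exist in libc; only the reentrant `_nss_<service>_<function>_r'
     naming convention and the `enum nss_status' return values are part
     of the existing NSS module interface.

	#include <nss.h>
	#include <errno.h>
	#include <stddef.h>
	#include <netinet/in.h>

	/* Illustrative result type for a netmasks entry.  */
	struct netmaskent
	{
	  struct in_addr n_net;
	  struct in_addr n_mask;
	};

	enum nss_status
	_nss_files_getnetmaskbyaddr_r (struct in_addr net,
				       struct netmaskent *result,
				       char *buffer, size_t buflen,
				       int *errnop)
	{
	  /* A real implementation would parse the corresponding files
	     database into *RESULT, keeping any strings in BUFFER.  */
	  (void) net; (void) result; (void) buffer; (void) buflen;

	  *errnop = ENOENT;
	  return NSS_STATUS_NOTFOUND;
	}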


[15] Cleaning up the header files.  Ideally, each header should follow
     the "good examples".  Each variable and function should have a
     short description of its purpose and its parameters.  The prototypes
     should always contain parameter names, which help to identify their
     meaning; much better than

		int foo (int, int, int, int);

     Blargh!
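
     The preferred style spells out what each argument is for (the
     parameter names below are purely illustrative):

		int foo (int fd, int request, int flags, int timeout);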

***  The conformtest.pl tool helps with cleaning up the namespace.  As
     far as is known, the prototypes all contain parameter names.  But
     maybe some comments can be improved.


[18] Based on the sprof program we need tools to analyze the output.  The
     result should be a link map which specifies in which order the .o
     files are placed in the shared object.  This should help to improve
     code locality and result in a smaller footprint (in code and data
     memory) since fewer pages are only partially used.


[19] A user-level STREAMS implementation should be available if the
     kernel does not provide the support.

***  This is a much lower priority job now that STREAMS are optional in
     XPG.


[20] More conversion modules for iconv(3).  Existing modules should be
     extended to do things like transliteration where this is wanted.
     For frequently used conversions, a direct conversion function
     should be available.
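
     To make clear what transliteration means from the user's side,
     here is a small sketch using the existing iconv(3) interface and
     the "//TRANSLIT" suffix; the exact output depends on the locale's
     transliteration rules:

	#include <iconv.h>
	#include <locale.h>
	#include <stdio.h>
	#include <string.h>

	int
	main (void)
	{
	  setlocale (LC_ALL, "");

	  iconv_t cd = iconv_open ("ASCII//TRANSLIT", "UTF-8");
	  if (cd == (iconv_t) -1)
	    {
	      perror ("iconv_open");
	      return 1;
	    }

	  char in[] = "Gr\xc3\xbc\xc3\x9f Gott";	/* "Grüß Gott" in UTF-8.  */
	  char out[64];
	  char *inp = in, *outp = out;
	  size_t inleft = strlen (in), outleft = sizeof out - 1;

	  /* Characters with no exact ASCII form are approximated
	     instead of making the conversion fail.  */
	  if (iconv (cd, &inp, &inleft, &outp, &outleft) == (size_t) -1)
	    perror ("iconv");
	  *outp = '\0';

	  printf ("%s\n", out);	/* E.g. "Gruss Gott".  */

	  iconv_close (cd);
	  return 0;
	}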


[21] The nscd program and the stubs in the libc should be changed so
     that each program uses only one socket connection.  Take a look at
	http://people.redhat.com/drepper/nscd.html

     An alternative approach is to use an mmap()ed file.  The idea is
     the following:
     - the nscd creates the hash tables and the information stored in
       them in an mmap()ed region.  This means no pointers may be
       used, only offsets.
     OR
       if POSIX shared memory is available, use a named shared memory
       region to put the data in
     - each program using NSS functionality tries to open the file
       with the data.
     - by checking some timestamp (which the nscd renews frequently)
       the programs can test whether the file is still valid
     - if the file is valid, look through the mapped data, locate the
       appropriate hash table for the database, and look up the entry.
       If it is present, we are done.
     - if the data is not yet in the database, we contact the nscd using
       the currently implemented methods.
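
     A rough sketch of what such an offset-based layout could look like
     (the structures and fields below are purely illustrative, not a
     format nscd actually uses):

	#include <stdint.h>
	#include <time.h>

	#define MAP_NBUCKETS 211

	/* Everything in the mapping is referenced by its offset from
	   the start of the mapping, never by a pointer, so every
	   process can map the file at a different address.  */
	struct map_head
	{
	  uint32_t version;		/* Layout version.  */
	  time_t timestamp;		/* Renewed regularly by the nscd;
					   clients treat a stale value as
					   "mapping no longer valid".  */
	  uint32_t hash[MAP_NBUCKETS];	/* Offset of the first entry in
					   each bucket, 0 = empty.  */
	};

	struct map_entry
	{
	  uint32_t next;		/* Offset of the next entry in the
					   same bucket, 0 = end of chain.  */
	  uint32_t key;			/* Offset of the key string.  */
	  uint32_t data;		/* Offset of the packed result.  */
	  time_t timeout;		/* When this entry expires.  */
	};

	/* Turn a stored offset back into an address in the mapping.  */
	static inline const struct map_entry *
	entry_at (const char *map, uint32_t offset)
	{
	  if (offset == 0)
	    return NULL;
	  return (const struct map_entry *) (map + offset);
	}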


[23] The `strptime' function needs to be completed.  This includes among
     other things that it must be taught about time zones.  The solution
     envisioned is to extract the timezones from the ADO timezone
     specifications.  Special care must be given to names which are used
     multiple times.  Here the precedence should (probably) be according
     to the geographical distance.  E.g., the timezone EST should be
     treated as the `Eastern Australia Time' instead of the US `Eastern
     Standard Time' if the current TZ variable is set to, say,
     Australia/Canberra or if the current locale is en_AU.


[25] Sun's nscd version implements a feature where the nscd keeps N entries
     for each database current.  I.e., if an entry's lifespan is over and
     it is one of the N entries to be kept, the nscd updates the information
     instead of removing the entry.

     How to decide which N entries to keep has to be examined.
     Factors should include the number of uses (of course), weighted by
     aging.  Just imagine a computer used by several people: the IDs of
     the current user should be preferred even if the last user spent
     more time.
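
     One simple way to combine the number of uses with aging (just a
     sketch of the idea, not anything nscd implements) is an
     exponentially decayed use count, so that recent users outweigh a
     heavy but long-gone user:

	#include <math.h>
	#include <time.h>

	struct entry_stats
	{
	  double score;		/* Decayed number of uses.  */
	  time_t last_use;
	};

	/* Score halves every hour (tunable).  */
	#define DECAY_HALF_LIFE 3600.0

	void
	record_use (struct entry_stats *st, time_t now)
	{
	  /* First decay the old score according to how long ago the
	     last use was, then count the new use.  */
	  double age = difftime (now, st->last_use);
	  st->score = st->score * pow (0.5, age / DECAY_HALF_LIFE) + 1.0;
	  st->last_use = now;
	}

	/* At expiry time the N entries with the highest score would be
	   refreshed instead of dropped.  */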


[27] We need a second test suite with tests which cannot run during a normal
     `make check' run.  This test suite can require root privileges and
     can test things like DNS (i.e., require network access),
     user-interaction, networking in general, and probably many other things.