$Id: TODO,v 1.11 2004/06/04 16:31:08 hipnod Exp $

0.0.11
------
o test short-writing of packets and make sure it works
o rename gt_conn_ -> gt_node_list_
o limit outgoing push upload rate
o use GtTransfer by splitting up gt_transfer_new instead of the
  http_incoming thing
o insert incoming HTTP server connections into the server connection list,
  to ensure the connection will get closed if there are too many
o optimize away extra packet allocations
o support basic packet routing
o implement simple flow-control
o make the QRP implementation more complete
o support QRP for reducing queries to nodes
o implement GGEP properly
o make a generic 3-way handshake for features that have a version
  number (Vendor-Message, X-Query-Routing, GGEP)
o make sure all global symbols are prefixed with gt_, GT_, or gnutella_
o move handshake code into handshake/
o rename gnutella_* handshake functions to gt_handshake_*
o maybe rewrite the handshaking code completely
o fix packet parsing when a string is at the end; need to fix
  io_buf_resize to terminate
o merge the dev-intset-search branch
o make web cache index requests once in a while
o keep a queue of outgoing pushed connects, and retry
o rename gt_search_exec to gt_share_db
o implement an abstract GtHttpHeader type
o eliminate the extra list length calculation in gt_node_list_foreach()
o test and possibly incorporate the SHA1 code from Linux
o do tests on the new searching code in the dev-intset-search branch
o fix node->share_state being NULL if handshaking hasn't completed
o use a callback system for running code when a connection has completed
o fix gt_share_state.c for ultrapeer mode

0.0.12
------
o support downloading from push proxies
o refactor the HTTP code and merge gt_http_client.c and http_request.c
o maybe do browsing
o break transfer code into transfer/
o break http code into http/
o break sharing code into share/
o break searching code into search/
o chop off gt_ prefixes in the move

0.0.13
------
o implement basic download mesh
o limit outgoing push uploads

0.1.0
-----
o support dynamic querying
o send out XML metadata

{******************************************************************************}

HTTP
----

o probably rename GtTransfer to GtHttpTransfer
o use a new type GtHttpHeader instead of Dataset for xfer->header
o generalize HTTP handling and separate transfer code into callbacks
o make a GtHttpConnection type to clean up gt_transfer_cancel()
o make a GtHttpConnectionCache for unifying pushed and non-pushed caches
o maybe separate GtHttpTransfer into GtHttpRequest/Response


TRANSFER
--------

o slap push proxies in the source URL; have to remove them too

{******************************************************************************}

THINGS NEEDED FROM THE DAEMON
-----------------------------

* make the search object persistent in front-end space, and require
  an explicit free from the front-end, or a front-end disconnect

  It would be better if each search were a handle existing in front-end
  space, one that stuck around after search completion and had to be
  explicitly cleaned up by the front-end. That way we could still send
  the front-end results that come in after a search has timed out, and
  let it decide what to do with them once the search has entered the
  "completed" (but still allocated) state.

  Haven't seen those post-completion results actually happen in a
  while, though. I would like to think this means the search timeout
  logic is good, but since there is no tracking I have no precise idea
  how often it happens, except that I don't usually see them on the
  debug console. Should really add something to check for them...

* interactive searches need a hash-type

  Interactive locate searches always assume sha1, because hash-type is
  null. The interface protocol should support a way to enumerate the
  supported hashes and pass a hash-type parameter from the user to
  locate.

* protocol callback for adding new sources that doesn't cancel
  existing transfers

  This is needed for download mesh support. download_add_source will
  cancel existing transfers, which is a problem: if a source advertises
  an alternate location that is already attached and actively
  transferring, adding it again would cancel that ongoing transfer.
  So a new function is needed for use in the callback that skips
  ongoing transfers (a sketch follows at the end of this file).

* queueing w/ MAX_DOWNLOADS_PERUSER == 1 doesn't always work

  Because transfer_length() operates on Chunks rather than Sources,
  sometimes more than one download per source gets started if the
  Chunk isn't active. Need to assess the impact of changing
  transfer_length() to use Sources instead of Chunks.

* need some way to enforce MAX_DOWNLOADS_PERUSER on a per-source basis

  Active-queueing depends on MAX_DOWNLOADS_PERUSER == 1. If
  MAX_DOWNLOADS_PERUSER changes, active-queueing breaks and downloads
  will fail all the time. Should enforce MAX_DOWNLOADS_PERUSER on a
  per-source basis at runtime (see the per-source counting sketch at
  the end of this file).

  Hmm, this may require a User abstraction. There was something else I
  was thinking would require that too, perhaps related to upload
  queueing and how doing p->user_cmp there is bad...

* push downloads need a way to initiate a transfer when no Chunk is
  allocated

* some way to enforce a minimum retry wait on sources

* configurable source timeouts to protect push downloads
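
{******************************************************************************}

SKETCHES
--------

For the download-mesh callback item above: a rough sketch of a
source-adding helper that skips ongoing transfers. Only
download_add_source() is taken from the notes above; the Source/Transfer
stand-in types and every other name here (the sources list, the url
field, download_add_source_if_new) are made up for illustration and are
not the daemon's real definitions.

    #include <string.h>

    /* stand-in types, just to make the idea concrete */
    typedef struct source
    {
        char          *url;
        struct source *next;
    } Source;

    typedef struct
    {
        Source *sources;          /* sources already attached */
    } Transfer;

    /* the existing daemon call, which cancels and re-adds sources */
    void download_add_source (Transfer *t, Source *s);

    /* Add a mesh-supplied source only if we don't already know it, so a
     * source that is currently transferring never gets cancelled by the
     * alt-loc callback. */
    static void download_add_source_if_new (Transfer *t, Source *s)
    {
        Source *it;

        for (it = t->sources; it != NULL; it = it->next)
        {
            /* already attached: re-adding would cancel any transfer in
             * progress on this source, so leave it alone */
            if (strcmp (it->url, s->url) == 0)
                return;
        }

        download_add_source (t, s);
    }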
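
For the MAX_DOWNLOADS_PERUSER items above: a sketch of counting active
downloads per user by walking Sources instead of Chunks. Again, every
type and name here (SourceStatus, the user/status fields,
active_downloads_for_user) is a stand-in invented for illustration, not
the daemon's real transfer_length() machinery.

    #include <string.h>

    typedef enum { SOURCE_IDLE, SOURCE_QUEUED, SOURCE_ACTIVE } SourceStatus;

    typedef struct source
    {
        char          *user;      /* user/host this source belongs to */
        SourceStatus   status;
        struct source *next;
    } Source;

    typedef struct
    {
        Source *sources;
    } Transfer;

    /* Count this user's sources that are actively downloading, rather
     * than counting Chunks the way transfer_length() does now. */
    static int active_downloads_for_user (Transfer *t, const char *user)
    {
        Source *it;
        int     n = 0;

        for (it = t->sources; it != NULL; it = it->next)
        {
            if (it->status == SOURCE_ACTIVE && strcmp (it->user, user) == 0)
                n++;
        }

        return n;
    }

A per-source MAX_DOWNLOADS_PERUSER check would then queue a source
whenever active_downloads_for_user() already returns the limit, instead
of relying on MAX_DOWNLOADS_PERUSER == 1.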