There's no recursion in there at all. In fact, all your code does is collect the links from the first page linked to in /my_search.
Let me say right now: you'd probably be better off using an existing tool for this. I don't know any off the top of my head, but that's what things like Google are for.
1) You need to keep track of which pages you've scanned already. Otherwise you'll get stuck in an infinite loop by scanning the same pages over and over again.
2) There needs to be some sort of restriction, such as staying inside that one site, or only following links X levels deep. I assume you want the former.
3) Like I said, you also need recursion. Make a function that returns a list of all links on a page (you have that already). Then make another function that takes a page and collects links from the pages it links to; that second function calls itself.
4) Remember that links have three common basic forms: with protocol (http://example.com/path/file.php), without protocol and absolute (/path/file.php), and without protocol and relative (../path/file.php). Each form needs to be handled a little differently.
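To give an idea of points 1–3, here's a minimal sketch in Python. The `get_links` argument stands in for your existing function that extracts links from a page (the URLs and structure here are made up for illustration, not taken from your code):

```python
from urllib.parse import urlparse


def crawl(url, get_links, visited=None):
    """Recursively visit pages reachable from url, staying on the same host.

    get_links(url) must return a list of absolute URLs found on that page.
    """
    if visited is None:
        visited = set()
    if url in visited:                       # point 1: skip pages already seen
        return visited
    visited.add(url)
    host = urlparse(url).netloc
    for link in get_links(url):
        if urlparse(link).netloc == host:    # point 2: stay inside one site
            crawl(link, get_links, visited)  # point 3: the recursive call
    return visited


# Quick check against a fake link graph with a cycle (a -> b -> a)
# and an off-site link that should be ignored:
graph = {
    "http://example.com/a": ["http://example.com/b", "http://other.com/x"],
    "http://example.com/b": ["http://example.com/a"],
}
pages = crawl("http://example.com/a", lambda u: graph.get(u, []))
```

Passing `get_links` in as a function also makes the crawler easy to test without touching the network, as the fake graph above shows.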