(Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro Pach\'{o}n in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg$^2$ field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5$\sigma$ point-source depth in a single visit in $r$ will be $\sim 24.5$ (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg$^2$ with $\delta<+34.5^\circ$, and will be imaged multiple times in six bands, $ugrizy$, covering the wavelength range 320--1050 nm. About 90\% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe an 18,000 deg$^2$ region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to $r\sim27.5$. The remaining 10\% of the observing time will be allocated to projects such as a Very Deep and Fast time domain survey. The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world.
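As a rough consistency check on the cadence quoted above (9.6 deg$^2$ field of view, paired 15-second exposures, roughly 10,000 deg$^2$ per filter in three nights), here is a back-of-the-envelope sketch in Python. The per-visit overhead and usable hours per night are illustrative assumptions, not LSST specifications.

```python
# Back-of-the-envelope check of the LSST cadence figures quoted above.
# OVERHEAD_S and NIGHT_HOURS are assumed values for illustration only.
FIELD_OF_VIEW_DEG2 = 9.6        # camera field of view (from the abstract)
EXPOSURE_S = 15.0               # single exposure length (from the abstract)
EXPOSURES_PER_VISIT = 2         # pairs of 15 s exposures per visit
OVERHEAD_S = 5.0                # assumed readout + slew per visit
NIGHT_HOURS = 8.0               # assumed usable observing time per night

visit_s = EXPOSURES_PER_VISIT * EXPOSURE_S + OVERHEAD_S
visits_per_night = NIGHT_HOURS * 3600.0 / visit_s
fields_per_night = visits_per_night / 2    # each field is visited twice nightly
area_deg2 = 3 * fields_per_night * FIELD_OF_VIEW_DEG2
print(f"~{area_deg2:,.0f} deg^2 in three nights")  # ~11,850 deg^2
```

With these assumed overheads the three-night, single-filter coverage comes out near 12,000 deg$^2$, consistent with the ~10,000 deg$^2$ stated in the abstract.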
DESI (Dark Energy Spectroscopic Instrument) is a Stage IV ground-based dark energy experiment that will study baryon acoustic oscillations (BAO) and the growth of structure through redshift-space distortions with a wide-area galaxy and quasar redshift survey. To trace the underlying dark matter distribution, spectroscopic targets will be selected in four classes from imaging data. We will measure luminous red galaxies up to $z=1.0$. To probe the Universe out to even higher redshift, DESI will target bright [O II] emission line galaxies up to $z=1.7$. Quasars will be targeted both as direct tracers of the underlying dark matter distribution and, at higher redshifts ($2.1 < z < 3.5$), for the Ly-$\alpha$ forest absorption features in their spectra, which will be used to trace the distribution of neutral hydrogen. When moonlight prevents efficient observations of the faint targets of the baseline survey, DESI will conduct a magnitude-limited Bright Galaxy Survey comprising approximately 10 million galaxies with a median $z\approx 0.2$. In total, more than 30 million galaxy and quasar redshifts will be obtained to measure the BAO feature and determine the matter power spectrum, including redshift-space distortions.
A survey that can cover the sky in optical bands over wide fields to faint magnitudes with a fast cadence will enable many of the exciting science opportunities of the next decade. The Large Synoptic Survey Telescope (LSST) will have an effective aperture of 6.7 meters and an imaging camera with a field of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over 20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with fifteen-second exposures in six broad bands from 0.35 to 1.1 microns, to a total point-source depth of r~27.5. The LSST Science Book describes the basic parameters of the LSST hardware, software, and observing plans. The book discusses educational and outreach opportunities, then goes on to describe a broad range of science that LSST will revolutionize: mapping the inner and outer Solar System, stellar populations in the Milky Way and nearby galaxies, the structure of the Milky Way disk and halo and other objects in the Local Volume, transient and variable objects both at low and high redshift, and the properties of normal and active galaxies at low and high redshift. It then turns to far-field cosmological topics, exploring properties of supernovae to z~1, strong and weak lensing, the large-scale distribution of galaxies and baryon oscillations, and how these different probes may be combined to constrain cosmological models and the physics of dark energy.
In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. Our experiments show that all these tasks are complementary and help the network learn better representations by fusing information at various levels. Importantly, our approach leads the KITTI benchmark on 2D, 3D and bird's eye view object detection, while being real-time.
The Dark Energy Spectroscopic Instrument (DESI) is currently measuring the spectra of 40 million galaxies and quasars, the largest such survey ever made to probe the nature of cosmological dark energy. The 4 m Mayall telescope at Kitt Peak National Observatory has been adapted for DESI, including the construction of a 3.2° diameter prime focus corrector that focuses astronomical light onto a 0.8 m diameter focal surface with excellent image quality over the DESI bandpass of 360–980 nm. The wide-field corrector includes six lenses, as large as 1.1 m in diameter and as heavy as 237 kilograms, including two counterrotating wedged lenses that correct for atmospheric dispersion over zenith angles from 0° to 60°. The lenses, cells, and barrel assembly all meet precise alignment tolerances on the order of tens of microns. The barrel alignment is maintained throughout a range of observing angles and temperature excursions in the Mayall dome by use of a hexapod, which is itself supported by a new cage, ring, and truss structure. In this paper we describe the design, fabrication, and performance of the new corrector and associated structure, focusing on how they meet DESI requirements. In particular, we describe the prescription and specifications of the lenses, design choices and error budgeting of the barrel assembly, stray light mitigations, and integration and test at the Mayall telescope. We conclude with some validation highlights that demonstrate the successful corrector on-sky performance, and we list some lessons learned during the multiyear fabrication phase.
Modern self-driving perception systems have been shown to improve upon processing complementary inputs such as LiDAR with images. In isolation, 2D images have been found to be extremely vulnerable to adversarial attacks. Yet, there have been limited studies on the adversarial robustness of multi-modal models that fuse LiDAR features with image features. Furthermore, existing works do not consider physically realizable perturbations that are consistent across the input modalities. In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle. We focus on physically realizable and input-agnostic attacks as they are feasible to execute in practice, and show that a single universal adversary can hide different host vehicles from state-of-the-art multi-modal detectors. Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features. Furthermore, we find that in modern sensor fusion methods which project image features into 3D, adversarial attacks can exploit the projection process to generate false positives across distant regions in 3D. Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly. However, we find that standard adversarial defenses still struggle to prevent false positives which are also caused by inaccurate associations between 3D LiDAR points and 2D pixels.
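The defense mentioned above, adversarial training, has a standard generic form. The sketch below is a minimal PGD-style version for a generic model and loss, with assumed hyperparameters; it illustrates the general recipe, not the authors' multi-modal pipeline or their feature-denoising variant.

```python
import torch

def pgd_attack(model, x, y, loss_fn, eps=0.03, alpha=0.01, steps=10):
    """Generic L-infinity PGD attack; hyperparameters are illustrative."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
    return x_adv

def adversarial_training_step(model, optimizer, x, y, loss_fn):
    """One step of standard adversarial training: train on attacked inputs."""
    model.eval()
    x_adv = pgd_attack(model, x, y, loss_fn)
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```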
In the past few years we have seen great advances in object perception (particularly in 4D space-time dimensions) thanks to deep learning methods. However, they typically rely on large amounts of high-quality labels to achieve good performance, which often require time-consuming and expensive work by human annotators. To address this we propose an automatic annotation pipeline that generates accurate object trajectories in 3D space (i.e., 4D labels) from LiDAR point clouds. The key idea is to decompose the 4D object label into two parts: the object size in 3D, which is fixed through time for rigid objects, and the motion path describing the evolution of the object's pose through time. Instead of generating a series of labels in one shot, we adopt an iterative refinement process where online generated object detections are tracked through time as the initialization. Given the cheap but noisy input, our model produces higher quality 4D labels by re-estimating the object size and smoothing the motion path, where the improvement is achieved by exploiting aggregated observations and motion cues over the entire trajectory. We validate the proposed method on a large-scale driving dataset and show a 25% reduction of human annotation efforts. We also showcase the benefits of our approach in the annotator-in-the-loop setting.
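To make the size/path decomposition concrete, here is a minimal sketch of a 4D label as one rigid 3D size plus a per-frame pose track, with median size aggregation and a moving-average path smoother standing in for the paper's learned refinement. The names, box layout, and smoothing choice are illustrative assumptions.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Label4D:
    """A 4D label: one rigid 3D size shared across time, plus a pose track."""
    size_lwh: np.ndarray   # (3,) length, width, height, fixed over time
    poses: np.ndarray      # (T, 4) x, y, z, yaw for each of T frames

def refine(noisy_boxes: np.ndarray, window: int = 5) -> Label4D:
    """Illustrative stand-in for the paper's refinement step.
    noisy_boxes: (T, 7) per-frame detections as x, y, z, yaw, l, w, h."""
    poses, sizes = noisy_boxes[:, 0:4], noisy_boxes[:, 4:7]
    # Aggregate all observations into a single rigid size estimate.
    size = np.median(sizes, axis=0)
    # Smooth the motion path with a simple moving average.
    # (A real implementation would unwrap yaw angles before averaging.)
    k = np.ones(window) / window
    smoothed = np.stack(
        [np.convolve(poses[:, i], k, mode="same") for i in range(4)], axis=1)
    return Label4D(size_lwh=size, poses=smoothed)
```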
The Large Synoptic Survey Telescope (LSST) will use an active optics system (AOS) to maintain alignment and surface figure on its three large mirrors. Corrective actions fed to the LSST AOS are determined from information derived from four curvature wavefront sensors located at the corners of the focal plane. Each wavefront sensor is a split detector such that the halves are 1 mm on either side of focus. In this paper, we describe the extensions to published curvature wavefront sensing algorithms needed to address challenges presented by the LSST, namely the large central obscuration, the fast f/1.23 beam, off-axis pupil distortions, and vignetting at the sensor locations. We also describe corrections needed for the split sensors and the effects from the angular separation of different stars providing the intrafocal and extrafocal images. Lastly, we present simulations that demonstrate convergence, linearity, and negligible noise when compared to atmospheric effects when the algorithm extensions are applied to the LSST optical system. The algorithm extensions reported here are generic and can easily be adapted to other wide-field optical systems including similar telescopes with large central obscuration and off-axis curvature sensing.
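For background, curvature wavefront sensing of the kind extended here is commonly derived from the transport-of-intensity relation between the intrafocal image $I_1$, the extrafocal image $I_2$, and the wavefront $W$. A standard Roddier-style form is given below; sign and normalization conventions vary, and the LSST-specific extensions described in the abstract are not captured by it.

```latex
% Standard curvature-sensing relation (not the LSST-specific extensions):
% the normalized intensity difference senses the wavefront Laplacian
% plus a pupil-edge boundary term.
\[
\frac{I_1 - I_2}{I_1 + I_2}
\;=\;
\frac{f\,(f-\ell)}{\ell}
\left( \frac{\partial W}{\partial n}\,\delta_c \;-\; \nabla^2 W \right)
\]
% f: focal length; \ell: defocus offset of the sensor halves;
% \delta_c: Dirac distribution on the pupil edge carrying the boundary term.
```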
Sensor simulation is a key component for testing the performance of self-driving vehicles and for data augmentation to better train perception systems. Typical approaches rely on artists to create both 3D assets and their animations to generate a new scenario. This, however, does not scale. In contrast, we propose to recover the shape and motion of pedestrians from sensor readings captured in the wild by a self-driving car driving around. Towards this goal, we formulate the problem as energy minimization in a deep structured model that exploits human shape priors, reprojection consistency with 2D poses extracted from images, and a ray-caster that encourages the reconstructed mesh to agree with the LiDAR readings. Importantly, we do not require any ground-truth 3D scans or 3D pose annotations. We then incorporate the reconstructed pedestrian assets bank in a realistic LiDAR simulation system by performing motion retargeting, and show that the simulated LiDAR data can be used to significantly reduce the amount of annotated real-world data required for visual perception tasks.
Code LLMs have emerged as a specialized research field, with remarkable studies dedicated to enhancing models' coding capabilities through fine-tuning on pre-trained models. Previous fine-tuning approaches were typically tailored to specific downstream tasks or scenarios, which meant separate fine-tuning for each task, requiring extensive training resources and posing challenges in terms of deployment and maintenance. Furthermore, these approaches failed to leverage the inherent interconnectedness among different code-related tasks. To overcome these limitations, we present a multi-task fine-tuning framework, MFTCoder, that enables simultaneous and parallel fine-tuning on multiple tasks. By incorporating various loss functions, we effectively address common challenges in multi-task learning, such as data imbalance, varying difficulty levels, and inconsistent convergence speeds. Extensive experiments have conclusively demonstrated that our multi-task fine-tuning approach outperforms both individual fine-tuning on single tasks and fine-tuning on a mixed ensemble of tasks. Moreover, MFTCoder offers efficient training capabilities, including efficient data tokenization modes and PEFT fine-tuning, resulting in significantly improved speed compared to traditional fine-tuning methods. MFTCoder seamlessly integrates with several mainstream open-source LLMs, such as CodeLLama and Qwen. Leveraging the CodeLLama foundation, our MFTCoder fine-tuned model, \textsc{CodeFuse-CodeLLama-34B}, achieves an impressive pass@1 score of 74.4\% on the HumanEval benchmark, surpassing GPT-4 performance (67\%, zero-shot). MFTCoder is open-sourced at \url{https://github.com/codefuse-ai/MFTCOder}
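As one concrete reading of "incorporating various loss functions" to handle data imbalance across tasks, a common recipe is to average the token-level cross-entropy within each task before averaging across tasks, so that large tasks cannot dominate the gradient. The sketch below illustrates that generic recipe; it is an assumption for illustration, not MFTCoder's exact formulation.

```python
import torch
import torch.nn.functional as F

def balanced_multitask_loss(logits, targets, task_ids, num_tasks):
    """Per-task token-averaged cross-entropy, then a uniform mean over tasks.
    Generic loss-balancing illustration, not MFTCoder's exact loss.
    logits: (N, V) token logits; targets: (N,); task_ids: (N,)."""
    per_token = F.cross_entropy(logits, targets, reduction="none")  # (N,)
    task_losses = []
    for t in range(num_tasks):
        mask = task_ids == t
        if mask.any():
            # Normalize within each task so task size does not dominate.
            task_losses.append(per_token[mask].mean())
    return torch.stack(task_losses).mean()  # each present task weighted equally
```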
The success of language models in code assistance has spurred the proposal of repository-level code completion as a means to enhance prediction accuracy, utilizing the context from the entire codebase. However, this amplified context can inadvertently increase inference latency, potentially undermining the developer experience and deterring tool adoption, a challenge we term the Context-Latency Conundrum. This paper introduces RepoGenix, a pioneering solution designed to enhance repository-level code completion without the latency trade-off. RepoGenix uniquely fuses two types of contexts: the analogy context, rooted in code analogies, and the rationale context, which encompasses in-depth semantic relationships. We propose a novel rank truncated generation (RTG) technique that efficiently condenses these contexts into prompts with restricted size. This enables RepoGenix to deliver precise code completions while maintaining inference efficiency. Through testing with the CrossCodeEval suite, RepoGenix has demonstrated a significant leap over existing models, achieving a 40.90% to 59.75% increase in exact match (EM) accuracy for code completions and a 26.8% enhancement in inference speed. Beyond experimental validation, RepoGenix has been integrated into the workflow of a large enterprise, where it actively supports various coding tasks.
High-cadence, multiwavelength observations have continuously revealed the diversity of tidal disruption events (TDEs), thus greatly advancing our knowledge and understanding of TDEs. In this work, we conducted an intensive optical-UV and X-ray follow-up campaign of TDE AT2023lli, and found a remarkable month-long bump in its UV/optical light curve nearly two months prior to maximum brightness. The bump represents the longest separation time from the main peak among known TDEs to date. The main UV/optical outburst declines as $t^{-4.10}$, making it one of the fastest decaying optically selected TDEs. Furthermore, we detected sporadic X-ray emission 30 days after the UV/optical peak, accompanied by a reduction in the period of inactivity. It is proposed that the UV/optical bump could be caused by the self-intersection of the stream debris, whereas the primary peak is generated by the reprocessed emission of the accretion process. In addition, our results suggest that episodic X-ray radiation during the initial phase of decline may be due to the patched obscurer surrounding the accretion disk, a phenomenon associated with the inhomogeneous reprocessing process. The double TDE scenario, in which two stars are disrupted in sequence, is also a possible explanation for producing the observed early bump and main peak. We anticipate that the multicolor light curves of TDEs, especially in the very early stages, and the underlying physics can be better understood in the near future with the assistance of dedicated surveys such as the deep high-cadence survey of the 2.5-meter Wide Field Survey Telescope (WFST).
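For reference, the measured decline is steep compared with the canonical mass-fallback scaling usually quoted for TDEs; written side by side:

```latex
% Measured UV/optical decline of AT2023lli (from the abstract) vs. the
% canonical TDE fallback-rate scaling.
\[
L_{\mathrm{UV/opt}}(t) \;\propto\; t^{-4.10}
\qquad \text{vs.} \qquad
\dot{M}_{\mathrm{fb}}(t) \;\propto\; t^{-5/3}
\]
```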
We address the problem of real-time 3D object detection from point clouds in the context of autonomous driving. Speed is critical as detection is a necessary component for safety. Existing approaches are, however, expensive in computation due to the high dimensionality of point clouds. We utilize the 3D data more efficiently by representing the scene from the Bird's Eye View (BEV), and propose PIXOR, a proposal-free, single-stage detector that outputs oriented 3D object estimates decoded from pixel-wise neural network predictions. The input representation, network architecture, and model optimization are specially designed to balance high accuracy and real-time efficiency. We validate PIXOR on two datasets: the KITTI BEV object detection benchmark, and a large-scale 3D vehicle detection benchmark. On both datasets we show that the proposed detector surpasses other state-of-the-art methods notably in terms of Average Precision (AP), while still running at 10 FPS.
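A minimal sketch of the BEV input idea: rasterize LiDAR points into a fixed top-down grid. The grid extent, resolution, and binary occupancy channel here are illustrative assumptions, not PIXOR's exact encoding, which also includes height slices and reflectance channels.

```python
import numpy as np

def lidar_to_bev(points, x_range=(0., 70.), y_range=(-40., 40.), res=0.1):
    """Rasterize an (N, 3) LiDAR point cloud into a binary BEV occupancy grid.
    Extents and resolution are illustrative choices."""
    x, y = points[:, 0], points[:, 1]
    keep = (x >= x_range[0]) & (x < x_range[1]) & \
           (y >= y_range[0]) & (y < y_range[1])
    x, y = x[keep], y[keep]
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    grid = np.zeros((h, w), dtype=np.float32)
    i = ((x - x_range[0]) / res).astype(int)
    j = ((y - y_range[0]) / res).astype(int)
    grid[i, j] = 1.0  # mark occupied cells
    return grid
```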
We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has low memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, by which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
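The update rule is fully specified in the paper; a compact one-step NumPy version, using the paper's default hyperparameters, reads:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (defaults are the paper's suggested hyperparameters).
    theta: parameters; grad: gradient at theta; m/v: moment estimates;
    t: 1-based timestep. Returns updated (theta, m, v)."""
    m = b1 * m + (1 - b1) * grad            # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad**2         # biased second-moment estimate
    m_hat = m / (1 - b1**t)                 # bias correction for m
    v_hat = v / (1 - b2**t)                 # bias correction for v
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

The bias-correction factors $1-\beta_1^t$ and $1-\beta_2^t$ counteract the zero initialization of the moment estimates at early timesteps.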
This paper aims at high-accuracy 3D object detection in the autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point clouds and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the bird's eye view representation of the 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25% and 30% AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 14.9% higher AP than the state-of-the-art on the hard data among the LIDAR-based methods.
In this paper we propose a novel deep neural network that is able to jointly reason about 3D detection, tracking and motion forecasting given data captured by a 3D sensor. By jointly reasoning about these tasks, our holistic approach is more robust to occlusion as well as sparse data at range. Our approach performs 3D convolutions across space and time over a bird's eye view representation of the 3D world, which is very efficient in terms of both memory and computation. Our experiments on a new very large-scale dataset captured in several North American cities show that we can outperform the state-of-the-art by a large margin. Importantly, by sharing computation we can perform all tasks in as little as 30 ms.
In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encodes both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large-scale 3D object detection benchmark shows significant improvements over the state of the art.
In this paper we show that High-Definition (HD) maps provide strong priors that can boost the performance and robustness of modern 3D object detectors. Towards this goal, we design a single-stage detector that extracts geometric and semantic features from the HD maps. As maps might not be available everywhere, we also propose a map prediction module that estimates the map on the fly from raw LiDAR data. We conduct extensive experiments on KITTI as well as a large-scale 3D detection benchmark containing 1 million frames, and show that the proposed map-aware detector consistently outperforms the state-of-the-art in both mapped and unmapped scenarios. Importantly, the whole framework runs at 20 frames per second.
In this work, we study 3D object detection from RGBD data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefiting from learning directly on raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on the KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability.
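A minimal sketch of the frustum-lifting step behind this style of method: given a 2D detection box and a camera projection matrix, keep only the LiDAR points whose image projections fall inside the box. The function name and the simple pinhole setup are illustrative assumptions.

```python
import numpy as np

def points_in_frustum(points, P, box2d):
    """Select points whose image projections land inside a 2D detection box.
    points: (N, 3) in camera coordinates; P: (3, 4) projection matrix;
    box2d: (xmin, ymin, xmax, ymax). Simple pinhole model for illustration."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    uvw = pts_h @ P.T
    u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]
    xmin, ymin, xmax, ymax = box2d
    keep = (uvw[:, 2] > 0) & (u >= xmin) & (u <= xmax) & \
           (v >= ymin) & (v <= ymax)
    return points[keep]
```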
Accurate detection of objects in 3D point clouds is a central problem in many applications, such as autonomous navigation, housekeeping robots, and augmented/virtual reality. To interface a highly sparse LiDAR point cloud with a region proposal network (RPN), most existing efforts have focused on hand-crafted feature representations, for example, a bird's eye view projection. In this work, we remove the need for manual feature engineering for 3D point clouds and propose VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single-stage, end-to-end trainable deep network. Specifically, VoxelNet divides a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer. In this way, the point cloud is encoded as a descriptive volumetric representation, which is then connected to an RPN to generate detections. Experiments on the KITTI car detection benchmark show that VoxelNet outperforms the state-of-the-art LiDAR-based 3D detection methods by a large margin. Furthermore, our network learns an effective discriminative representation of objects with various geometries, leading to encouraging results in 3D detection of pedestrians and cyclists, based on only LiDAR.
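To make the voxel grouping step concrete, here is a minimal sketch of partitioning a point cloud into equally spaced voxels and grouping points by voxel index. The voxel size, point cap, and dict-based grouping are illustrative; the full VFE layer then applies a shared MLP plus max-pooling to each group.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size=(0.2, 0.2, 0.4), max_pts=35):
    """Group an (N, 3+) point cloud into equally spaced voxels.
    Voxel size and point cap are illustrative choices; VoxelNet randomly
    samples within full voxels, whereas this sketch keeps the first arrivals."""
    idx = np.floor(points[:, :3] / np.asarray(voxel_size)).astype(int)
    voxels = defaultdict(list)
    for p, key in zip(points, map(tuple, idx)):
        if len(voxels[key]) < max_pts:  # cap points per voxel
            voxels[key].append(p)
    return {k: np.stack(v) for k, v in voxels.items()}
```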
Autonomous driving requires 3D perception of vehicles and other objects in the environment. Most current methods support only 2D vehicle detection. This paper proposes a flexible pipeline to adopt any 2D detection network and fuse it with a 3D point cloud to generate 3D information with minimal changes to the 2D detection networks. To identify the 3D box, an effective model-fitting algorithm is developed based on generalised car models and score maps. A two-stage convolutional neural network (CNN) is proposed to refine the detected 3D box. This pipeline is tested on the KITTI dataset using two different 2D detection networks. The 3D detection results based on these two networks are similar, demonstrating the flexibility of the proposed pipeline. The results rank second among the 3D detection algorithms, indicating its competitiveness in 3D detection.
The goal of this paper is to perform 3D object detection in the context of autonomous driving. Our method aims at generating a set of high-quality 3D object proposals by exploiting stereo imagery. We formulate the problem as minimizing an energy function that encodes object size priors, placement of objects on the ground plane, and several depth-informed features that reason about free space, point cloud densities and distance to the ground. We then exploit a convolutional neural network (CNN) on top of these proposals to perform object detection; in particular, the CNN exploits context and depth information to jointly regress to 3D bounding box coordinates and object pose. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. When combined with the CNN, our approach outperforms all existing results in object detection and orientation estimation tasks for all three KITTI object classes. Furthermore, we also experiment with the setting where LIDAR information is available, and show that using both LIDAR and stereo leads to the best result.
Most of the recent successful methods in accurate object detection and localization used some variant of R-CNN style two-stage convolutional neural networks (CNN), where plausible regions are proposed in the first stage and then refined in a second stage. Despite their simplicity of training and efficiency in deployment, single-stage detection methods have not been as competitive when evaluated on benchmarks that consider mAP at high IoU thresholds. In this paper, we propose a novel single-stage, end-to-end trainable object detection network to overcome this limitation. We achieve this by introducing the Recurrent Rolling Convolution (RRC) architecture over multi-scale feature maps to construct object classifiers and bounding box regressors which are deep in context. We evaluated our method on the challenging KITTI dataset, which measures methods at an IoU threshold of 0.7. We show that with RRC, a single reduced VGG-16 based model already significantly outperforms all previously published results. At the time this paper was written, our models ranked first in KITTI car detection (the hard level), first in cyclist detection, and second in pedestrian detection. These results were not reached by previous single-stage methods. The code is publicly available.
Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.
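The top-down pathway with lateral connections is simple to write down: 1x1 convolutions project each backbone stage to a common width, coarser maps are upsampled and added to finer ones, and a 3x3 convolution smooths each merged map. A minimal PyTorch sketch with illustrative channel counts follows.

```python
import torch.nn as nn
import torch.nn.functional as F

class TopDownFPN(nn.Module):
    """Sketch of an FPN-style top-down pathway with lateral connections."""

    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels)
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in in_channels)

    def forward(self, feats):  # feats: backbone maps ordered fine -> coarse
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        for i in range(len(laterals) - 2, -1, -1):   # merge coarse into fine
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        return [s(p) for s, p in zip(self.smooth, laterals)]
```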
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
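The residual reformulation amounts to having the stacked layers learn F(x) and outputting F(x) + x through an identity shortcut. A minimal PyTorch block for the equal-shape case (a sketch, with illustrative layer choices) is:

```python
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Sketch of y = F(x) + x: the body learns the residual F(x) rather
    than an unreferenced mapping; the shortcut is the identity."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)   # identity shortcut
```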
Few prior works study deep learning on point sets. PointNet by Qi et al. is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space the points live in, limiting its ability to recognize fine-grained patterns and its generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. Observing further that point sets are usually sampled with varying densities, which greatly degrades the performance of networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network, called PointNet++, is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.
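The nested partitioning starts from a set of well-spread centroids, commonly chosen by farthest point sampling; neighborhoods around each centroid are then grouped and fed to a small PointNet. A simplified NumPy sketch of the sampling step (illustrative, not the paper's implementation):

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedily pick k centroids, each maximally far from those already
    chosen, so they cover the point set evenly.

    points : (N, 3) input point cloud
    """
    n = points.shape[0]
    chosen = [np.random.randint(n)]          # random seed point
    dist = np.full(n, np.inf)                # distance to nearest chosen point
    for _ in range(k - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))    # farthest remaining point
    return points[chosen]
```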
We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second-stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high-resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second-stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state-of-the-art results on the KITTI 3D object detection benchmark [1] while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is available at: https://github.com/kujason/avod.
The point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, and which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par with or even better than state of the art. Theoretically, we provide analysis toward understanding what the network has learned and why it is robust with respect to input perturbation and corruption.
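The permutation invariance comes from applying one shared MLP to every point and then aggregating with a symmetric function (max-pooling), so shuffling the input points leaves the global feature unchanged. A toy PyTorch sketch of this core, with illustrative widths and omitting the paper's input/feature transform networks:

```python
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Sketch of PointNet's core: a shared per-point MLP followed by a
    symmetric max-pool, making the global feature order-invariant."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 1024), nn.ReLU())
        self.head = nn.Linear(1024, num_classes)

    def forward(self, pts):                    # pts: (batch, num_points, 3)
        feat = self.mlp(pts)                   # shared weights across points
        global_feat = feat.max(dim=1).values   # permutation-invariant pooling
        return self.head(global_feat)
```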
We present a comprehensive methodology for the simulation of astronomical images from optical survey telescopes. We use a photon Monte Carlo approach to construct images by sampling photons from models of astronomical source populations, and then simulating those photons through the system as they interact with the atmosphere, telescope, and camera. We demonstrate that all physical effects for optical light that determine the shapes, locations, and brightnesses of individual stars and galaxies can be accurately represented in this formalism. By using large-scale grid computing, modern processors, and an efficient implementation that can produce 400,000 photons s$^{-1}$, we demonstrate that even very large optical surveys can now be simulated. We demonstrate that we are able to (1) construct kilometer-scale phase screens necessary for wide-field telescopes, (2) reproduce atmospheric point-spread function moments using a fast novel hybrid geometric/Fourier technique for non-diffraction-limited telescopes, (3) accurately reproduce the expected spot diagrams for complex aspheric optical designs, and (4) recover the system effective area predicted from analytic photometry integrals. This new code, the Photon Simulator (PhoSim), is publicly available. We have implemented the Large Synoptic Survey Telescope design, and it can be extended to other telescopes. We expect that because of the comprehensive physics implemented in PhoSim, it will be used by the community to plan future observations, interpret detailed existing observations, and quantify systematics related to various astronomical measurements. Future development and validation by comparisons with real data will continue to improve the fidelity and usability of the code.
Morphology is a powerful indicator of a galaxy's dynamical and merger history. It is strongly correlated with many physical parameters, including mass, star formation history and the distribution of mass. The Galaxy Zoo project collected simple morphological classifications of nearly 900 000 galaxies drawn from the Sloan Digital Sky Survey, contributed by hundreds of thousands of volunteers. This large number of classifications allows us to exclude classifier error, and measure the influence of subtle biases inherent in morphological classification. This paper presents the data collected by the project, alongside measures of classification accuracy and bias. The data are now publicly available and full catalogues can be downloaded in electronic format from http://data.galaxyzoo.org.
A survey that can cover the sky in optical bands over wide fields to faint magnitudes with a fast cadence will enable many of the exciting science opportunities of the next decade. The Large Synoptic Survey Telescope (LSST) will have an effective aperture of 6.7 meters and an imaging camera with field of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over 20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with fifteen second exposures in six broad bands from 0.35 to 1.1 microns, to a total point-source depth of r~27.5. The LSST Science Book describes the basic parameters of the LSST hardware, software, and observing plans. The book discusses educational and outreach opportunities, then goes on to describe a broad range of science that LSST will revolutionize: mapping the inner and outer Solar System, stellar populations in the Milky Way and nearby galaxies, the structure of the Milky Way disk and halo and other objects in the Local Volume, transient and variable objects both at low and high redshift, and the properties of normal and active galaxies at low and high redshift. It then turns to far-field cosmological topics, exploring properties of supernovae to z~1, strong and weak lensing, the large-scale distribution of galaxies and baryon oscillations, and how these different probes may be combined to constrain cosmological models and the physics of dark energy.
A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.
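The aggregation across views reduces to a one-liner: run the same CNN on each rendered view and combine the per-view feature vectors with an element-wise max. The sketch below is illustrative; in multi-view architectures of this kind the pooling is typically applied to intermediate CNN features rather than final outputs.

```python
import torch

def view_pool(view_features):
    """Element-wise max across rendered views.

    view_features : (num_views, feature_dim) tensor, one row per view,
                    e.g. produced by a shared CNN applied to each rendering.
    """
    return view_features.max(dim=0).values   # (feature_dim,) shape descriptor
```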
We examine how lensing tomography with the bispectrum and power spectrum can constrain cosmological parameters and the equation of state of dark energy. Our analysis uses the full information at the two- and three-point level from angular scales of a few degrees to 5 arcminutes ($50 \le l \le 3000$), which will be probed by lensing surveys. We use all triangle configurations, cross-power spectra and bispectra constructed from up to three redshift bins with photometric redshifts, and all relevant covariances in our analysis. We find that the parameter constraints from bispectrum tomography are comparable to those from power spectrum tomography. Combining the two improves parameter accuracies by a factor of 3 due to their complementarity. For the dark energy parameterization $w(a) = w_0 + w_a(1-a)$, the marginalized errors from lensing alone are $\sigma(w_0) \sim 0.03 f_{\rm sky}^{-1/2}$ and $\sigma(w_a) \sim 0.1 f_{\rm sky}^{-1/2}$. We show that these constraints can be further improved when combined with measurements of the cosmic microwave background or Type Ia supernovae. The amplitude and shape of the mass power spectrum are also shown to be precisely constrained. We use hyper-extended perturbation theory to compute the nonlinear lensing bispectrum for dark energy models. Accurate model predictions of the bispectrum in the moderately nonlinear regime, calibrated with numerical simulations, will be needed to realize the parameter accuracy we have estimated. Finally, we estimate how well the lensing bispectrum can constrain a model with primordial non-Gaussianity.
Peculiar velocities are one of the only probes of very large-scale mass density fluctuations in the nearby Universe. We present new "minimal variance" bulk flow measurements based upon the "First Amendment" compilation of 245 Type Ia supernovae (SNe) peculiar velocities and find a bulk flow of 249 ± 76 km/s in the direction l = 319 ± 18 deg, b = 7 ± 14 deg. The SNe bulk flow is consistent with the expectations of $\Lambda$CDM. However, it is also marginally consistent with the bulk flow of a larger compilation of non-SNe peculiar velocities (Watkins, Feldman, & Hudson 2009). By comparing the SNe peculiar velocities to predictions of the IRAS Point Source Catalog Redshift survey (PSCz) galaxy density field, we find $\Omega_m^{0.55}\,\sigma_{8,\rm lin} = 0.40 \pm 0.07$, which is in agreement with $\Lambda$CDM. However, we also show that the PSCz density field fails to account for 150 ± 43 km/s of the SNe bulk motion.
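For intuition, the simplest version of a bulk-flow fit models each line-of-sight peculiar velocity as the projection of a single vector B onto the unit vector toward the object and solves a weighted least-squares problem. The sketch below uses plain inverse-variance weights as a stand-in for the paper's "minimal variance" weighting, which is more elaborate.

```python
import numpy as np

def bulk_flow_lstsq(nhat, v_radial, sigma):
    """Fit a bulk-flow vector B from radial peculiar velocities via the
    model v_r = B . nhat, with inverse-variance weights.

    nhat     : (N, 3) unit vectors toward each supernova
    v_radial : (N,) measured radial peculiar velocities [km/s]
    sigma    : (N,) per-object velocity uncertainties [km/s]
    """
    w = 1.0 / sigma**2
    M = nhat.T @ (nhat * w[:, None])   # 3x3 normal matrix, sum w n n^T
    b = nhat.T @ (w * v_radial)        # right-hand side, sum w v n
    B = np.linalg.solve(M, b)          # best-fit bulk flow vector [km/s]
    cov = np.linalg.inv(M)             # parameter covariance
    return B, cov
```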
The Magellan active optics system has been operating continuously on the Baade 6.5-m since the start of science operations in February 2001. The active optical elements include the primary mirror, with 104 actuators, and the secondary mirror, with 5 positional degrees of freedom. Shack-Hartmann (SH) wavefront sensors are an integral part of the dual probe guiders. The probes function interchangeably, with either probe capable of guiding or wavefront sensing. In the course of most routine observing, stars brighter than 17th magnitude are used to apply corrections once or twice per minute. The rms radius determined from roughly 250 SH spots typically ranges between 0.05" and 0.10". The spot pattern is analyzed in terms of a mixture of 3 Zernike polynomials (used to correct the secondary focus and decollimation) and 12 bending modes of the primary mirror (used to compensate for residual thermal and gravitational distortions). Zernike focus and the lowest order circularly symmetric bending mode, known affectionately as the "conemode," are sufficiently non-degenerate that they can be solved for and corrected separately.
Type Ia Supernovae are standard candles, so their mean apparent magnitude has been exploited to learn about the redshift-distance relationship. Besides intrinsic scatter in this standard candle, additional scatter is caused by gravitational magnification by large-scale structure. Here we probe the dependence of this dispersion on cosmological parameters and show that information about the amplitude of clustering, $\sigma_8$, is contained in the scatter. In principle, it will be possible to constrain $\sigma_8$ to within 5% with observations of 2000 Type Ia Supernovae. We identify three sources of systematic error, namely evolution of intrinsic scatter, baryon contributions to lensing, and non-Gaussianity of lensing, which will make this measurement difficult.
We have constructed a large-format mosaic CCD camera for the Sloan Digital Sky Survey. The camera consists of two arrays, a photometric array that uses 30 2048 × 2048 SITe/Tektronix CCDs (24 μm pixels) with an effective imaging area of 720 cm$^2$ and an astrometric array that uses 24 400 × 2048 CCDs with the same pixel size, which will allow us to tie bright astrometric standard stars to the objects imaged in the photometric camera. The instrument will be used to carry out photometry essentially simultaneously in five color bands spanning the range accessible to silicon detectors on the ground in the time-delay-and-integrate (TDI) scanning mode. The photometric detectors are arrayed in the focal plane in six columns of five chips each, such that two scans cover a filled stripe 2.5° wide. This paper presents engineering and technical details of the camera.
In light of the tension in cosmological constraints reported by the Planck team between their Sunyaev-Zel'dovich-selected cluster counts and Cosmic Microwave Background (CMB) temperature anisotropies, we compare the Planck cluster mass estimates with robust weak-lensing mass measurements from the Weighing the Giants (WtG) project. For the 22 clusters in common between the Planck cosmology sample and WtG, we find an overall mass ratio of $\langle M_{\rm Planck}/M_{\rm WtG}\rangle = 0.688 \pm 0.072$. Extending the sample to clusters not used in the Planck cosmology analysis yields a consistent value of $\langle M_{\rm Planck}/M_{\rm WtG}\rangle = 0.698 \pm 0.062$ from 38 clusters in common. Identifying the weak-lensing masses as proxies for the true cluster mass (on average), these ratios are $\sim 1.6\sigma$ lower than the default bias factor of 0.8 assumed in the Planck cluster analysis. Adopting the WtG weak-lensing-based mass calibration would substantially reduce the tension found between the Planck cluster count cosmology results and those from CMB temperature anisotropies, thereby dispensing with the need for 'new physics' such as uncomfortably large neutrino masses (in the context of the measured Planck temperature anisotropies and other data). We also find modest evidence (at 95 per cent confidence) for a mass dependence of the calibration ratio and discuss its potential origin in light of systematic uncertainties in the temperature calibration of the X-ray measurements used to calibrate the Planck cluster masses. Our results exemplify the critical role that robust absolute mass calibration plays in cluster cosmology, and the invaluable role of accurate weak-lensing mass measurements in this regard.
AM CVn systems are a rare (about a dozen previously known) class of cataclysmic variables, arguably encompassing the shortest orbital periods (down to about 10 minutes) of any known binaries. Both binary components are thought to be degenerate (or partially so), likely with mass-transfer from a helium-rich donor onto a white dwarf, driven by gravitational radiation. Although rare, AM CVn systems are of high interest as possible SN Ia progenitors, and because they are predicted to be common sources of gravity waves in upcoming experiments such as LISA. We have identified four new AM CVn candidates from the Sloan Digital Sky Survey (SDSS) spectral database. All four show hallmark spectroscopic characteristics of the AM CVn class: each is devoid of hydrogen features, and instead shows a spectrum dominated by helium. All four show double-peaked emission, indicative of helium-dominated accretion disks. Limited time-series CCD photometric follow-on data have been obtained for three of the new candidates from the ARC 3.5m; most notably, a 28.3 minute binary period with sharp, deep eclipses is discovered in one case, SDSS J0926+3624. This is the first confirmed eclipsing AM CVn, and our data allow initial estimates of binary parameters for this ultracompact system. The four new SDSS objects also provide a substantial expansion of the currently critically-small sample of AM CVn systems.
A wide-field galaxy redshift survey allows one to probe galaxy clustering at the largest spatial scales, which carries invaluable information on horizon-scale physics complementary to the cosmic microwave background (CMB). Assuming the planned survey consisting of $z\sim1$ and $z\sim3$ surveys with areas of 2000 and 300 deg$^2$, respectively, we study the prospects for probing dark energy clustering from the measured galaxy power spectrum, assuming the dynamical properties of dark energy are specified in terms of the equation of state and the effective sound speed $c_{\rm e}$ in the context of an adiabatic cold dark matter dominated model. The dark energy clustering adds power to the galaxy power spectrum amplitude at spatial scales greater than the sound horizon, and the enhancement is sensitive to the redshift evolution of the net dark energy density, i.e. the equation of state. We find that the galaxy survey, when combined with the CMB expected from the Planck satellite mission, can distinguish dark energy clustering from a smooth dark energy model such as the quintessence model ($c_{\rm e}=1$) when $c_{\rm e} \lesssim 0.04$ (0.02) in the case of the constant equation of state $w_0=-0.9$ ($-0.95$). An ultimate full-sky survey of $z\sim1$ galaxies allows the detection when $c_{\rm e} \lesssim 0.08$ (0.04) for $w_0=-0.9$ ($-0.95$). These forecasts show a power comparable to an all-sky CMB and galaxy cross-correlation that probes the integrated Sachs-Wolfe effect. We also investigate a degeneracy between the dark energy clustering and the nonrelativistic neutrinos implied by the neutrino oscillation experiments, because the two effects both induce a scale-dependent modification in the galaxy power spectrum shape at the largest spatial scales accessible to the galaxy survey. It is shown that a wider redshift coverage can efficiently separate the two effects by utilizing their different redshift dependences, where dark energy clustering is apparent only at low redshifts $z \lesssim 1$.
This paper presents the design and science goals for the SkyMapper telescope. SkyMapper is a 1.3-m telescope featuring a 5.7-square-degree field-of-view Cassegrain imager commissioned for the Australian National University's Research School of Astronomy and Astrophysics. It is located at Siding Spring Observatory, Coonabarabran, NSW, Australia and will see first light in late 2007. The imager possesses 16 384 × 16 384 0.5-arcsec pixels. The primary scientific goal of the facility is to perform the Southern Sky Survey, a six-colour and multi-epoch (four-hour, one-day, one-week, one-month and one-year sampling) photometric survey of the southerly 2π sr to g ∼ 23 mag. The survey will provide photometry to better than 3% global accuracy and astrometry to better than 50 milliarcsec. Data will be supplied to the community as part of the Virtual Observatory effort. The survey will take five years to complete.
Due to their proximity, high dark-matter content, and apparent absence of non-thermal processes, Milky Way dwarf spheroidal satellite galaxies (dSphs) are excellent targets for the indirect detection of dark matter. Recently, eight new dSph candidates were discovered using the first year of data from the Dark Energy Survey (DES). We searched for gamma-ray emission coincident with the positions of these new objects in six years of Fermi Large Area Telescope data. We found no significant excesses of gamma-ray emission. Under the assumption that the DES candidates are dSphs with dark matter halo properties similar to the known dSphs, we computed individual and combined limits on the velocity-averaged dark matter annihilation cross section for these new targets. If the estimated dark-matter content of these dSph candidates is confirmed, they will constrain the annihilation cross section to lie below the thermal relic cross section for dark matter particles with masses < 20 GeV annihilating via the $b\bar{b}$ or $\tau^+\tau^-$ channels.
We use SDSS photometry of 73 million stars to simultaneously obtain the best-fit main-sequence stellar spectral energy distribution (SED) and the amount of dust extinction along the line of sight towards each star. Using a subsample of 23 million stars with 2MASS photometry, whose addition enables more robust results, we show that SDSS photometry alone is sufficient to break degeneracies between intrinsic stellar color and dust amount when the shape of the extinction curve is fixed. When using both SDSS and 2MASS photometry, the ratio of the total to selective absorption, $R_V$, can be determined with an uncertainty of about 0.1 for most stars in high-extinction regions. These fits enable detailed studies of the dust properties and its spatial distribution, and of the stellar spatial distribution at low Galactic latitudes. Our results are in good agreement with the extinction normalization given by the Schlegel et al. (1998, SFD) dust maps at high northern Galactic latitudes, but indicate that the SFD extinction map appears to be consistently overestimated by about 20% in the southern sky, in agreement with Schlafly et al. (2010). The constraints on the shape of the dust extinction curve across the SDSS and 2MASS bandpasses support the models by Fitzpatrick (1999) and Cardelli et al. (1989). For the latter, we find an $R_V=3.0\pm0.1$ (random) $\pm0.1$ (systematic) over most of the high-latitude sky. At low Galactic latitudes ($|b| < 5^\circ$), we demonstrate that the SFD map cannot be reliably used to correct for extinction, as most stars are embedded in dust rather than behind it. We introduce a method for efficient selection of candidate red giant stars in the disk, dubbed the "dusty parallax relation", which utilizes a correlation between distance and the extinction along the line of sight. We make these best-fit parameters, as well as all the input SDSS and 2MASS data, publicly available in a user-friendly format.
Deep multicolor galaxy surveys with photometric redshifts will provide a large number of two-point correlation observables: galaxy-galaxy angular correlations, galaxy-shear cross correlations, and shear-shear correlations between all redshifts. These observables can potentially enable a joint determination of the dark-energy-dependent evolution of the dark matter and distances as well as the relationship between galaxies and dark matter halos. With recent cosmic microwave background determinations of the initial power spectrum, a measurement of the mass clustering at even a single redshift will constrain a well-specified combination of dark energy (DE) parameters in a flat universe; we provide convenient fitting formulas for such studies. The combination of galaxy-shear and galaxy-galaxy correlations can determine this amplitude at multiple redshifts. We illustrate this ability in a description of the galaxy clustering with 5 free functions of redshift which can be fitted from the data. The galaxy modeling is based on a mapping onto halos of the same abundance that models a flux-limited selection. In this context and under a flat geometry, a 4000 deg$^2$ galaxy-lensing survey can achieve a statistical precision of $\sigma(\Omega_{\rm DE})=0.005$ for the dark energy density, $\sigma(w_{\rm DE})=0.02$ and $\sigma(w_a)=0.17$ for its equation of state and evolution, evaluated at dark energy-matter equality $z\approx0.4$, as well as constraints on the 5 halo functions out to $z=1$. More importantly, a joint analysis can make dark energy constraints robust against systematic errors in the shear-shear correlation and halo modeling.
We detect the correlated peculiar velocities of nearby type Ia supernovae (SNe), while highlighting an error in some of the literature. We find $\sigma_8 = 0.79 \pm 0.22$ from SNe, and examine the potential of this method to constrain cosmological parameters in the future. We demonstrate that a survey of 300 low-z SNe (such as the nearby SNfactory) will underestimate the errors on w by approximately 35% if the coherent peculiar velocities are not included.
We report measurements of the mass density, $\Omega_M$, and cosmological-constant energy density, $\Omega_\Lambda$, of the universe based on the analysis of 42 type Ia supernovae discovered by the Supernova Cosmology Project. The magnitude-redshift data for these supernovae, at redshifts between 0.18 and 0.83, are fitted jointly with a set of supernovae from the Calán/Tololo Supernova Survey, at redshifts below 0.1, to yield values for the cosmological parameters. All supernova peak magnitudes are standardized using a SN Ia light-curve width-luminosity relation. The measurement yields a joint probability distribution of the cosmological parameters that is approximated by the relation $0.8\,\Omega_M - 0.6\,\Omega_\Lambda \approx -0.2 \pm 0.1$ in the region of interest ($\Omega_M \lesssim 1.5$). For a flat ($\Omega_M + \Omega_\Lambda = 1$) cosmology we find $\Omega_M^{\rm flat} = 0.28^{+0.09}_{-0.08}$ ($1\sigma$ statistical) $^{+0.05}_{-0.04}$ (identified systematics). The data are strongly inconsistent with a $\Lambda=0$ flat cosmology, the simplest inflationary universe model. An open, $\Lambda=0$ cosmology also does not fit the data well: the data indicate that the cosmological constant is nonzero and positive, with a confidence of $P(\Lambda>0)=99\%$, including the identified systematic uncertainties. The best-fit age of the universe relative to the Hubble time is $t_0^{\rm flat} = 14.9^{+1.4}_{-1.1}\,(0.63/h)$ Gyr for a flat cosmology. The size of our sample allows us to perform a variety of statistical tests to check for possible systematic errors and biases. We find no significant differences in either the host reddening distribution or Malmquist bias between the low-redshift Calán/Tololo sample and our high-redshift sample. Excluding those few supernovae that are outliers in color excess or fit residual does not significantly change the results. The conclusions are also robust whether or not a width-luminosity relation is used to standardize the supernova peak magnitudes. We discuss and constrain, where possible, hypothetical alternatives to a cosmological constant.
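The flat-universe value follows directly from combining the quoted joint constraint with the flatness condition, a one-line substitution:

```latex
0.8\,\Omega_M - 0.6\,\Omega_\Lambda \approx -0.2
\quad\text{and}\quad
\Omega_M + \Omega_\Lambda = 1
\;\Longrightarrow\;
1.4\,\Omega_M - 0.6 \approx -0.2
\;\Longrightarrow\;
\Omega_M \approx 0.29,
```

consistent with the quoted best fit of $\Omega_M^{\rm flat} = 0.28$.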
We present the statistical properties of the first version of the Cold Core Catalogue of Planck Objects (C3PO), in terms of their spatial distribution, temperature, distance, mass, and morphology. We also describe the statistics of the Early Cold Core Catalogue (ECC, delivered with the Early Release Compact Source Catalogue, ERCSC), which is the subset of the 915 most reliable detections of the complete catalogue. We have used the CoCoCoDeT algorithm to extract 10783 cold sources. Temperature and dust emission spectral index $\beta$ values are derived using the fluxes in the IRAS 100 $\mu$m band and the three highest frequency Planck bands. Temperature spans from 7 K to 17 K, and peaks around 13 K. The data are not consistent with a constant value of $\beta$ over the whole temperature range. $\beta$ ranges from 1.4 to 2.8 with a mean value around 2.1, and several scenarios are possible, including $\beta(T)$ and the effect of multiple temperature components folded into the measurements. Distances are obtained for one third of the objects. Most of the detections are within 2 kpc in the Solar neighbourhood, but a few are at distances greater than 4 kpc. The cores are distributed from the deep Galactic plane, despite the confusion, to high latitudes (>30$^{\circ}$). The associated mass estimates range from 1 to $10^5$ solar masses. Using their physical properties, these cold sources are shown to be cold clumps, defined as the intermediate cold sub-structures between clouds and cores. These cold clumps are not isolated but mostly organized in filaments associated with molecular clouds. The Cold Core Catalogue of Planck Objects (C3PO) is the first unbiased all-sky catalogue of cold objects. It gives an unprecedented statistical view of the properties of these potential pre-stellar clumps and offers a unique possibility for their classification in terms of their intrinsic properties and environment.
We present a new model for computing the spectral evolution of stellar populations at ages between 100,000 yr and 20 Gyr at a resolution of 3 Å across the whole wavelength range from 3200 to 9500 Å for a wide range of metallicities. These predictions are based on a newly available library of observed stellar spectra. We also compute the spectral evolution across a larger wavelength range, from 91 Å to 160 $\mu$m, at lower resolution. The model incorporates recent progress in stellar evolution theory and an observationally motivated prescription for thermally-pulsing stars on the asymptotic giant branch. The latter is supported by observations of surface brightness fluctuations in nearby stellar populations. We show that this model reproduces well the observed optical and near-infrared colour-magnitude diagrams of Galactic star clusters of various ages and metallicities. Stochastic fluctuations in the numbers of stars in different evolutionary phases can account for the full range of observed integrated colours of star clusters in the Magellanic Clouds. The model reproduces in detail typical galaxy spectra from the Early Data Release (EDR) of the Sloan Digital Sky Survey (SDSS). We exemplify how this type of spectral fit can constrain physical parameters such as the star formation history, metallicity and dust content of galaxies. Our model is the first to enable accurate studies of absorption-line strengths in galaxies containing stars over the full range of ages. Using the highest-quality spectra of the SDSS EDR, we show that this model can reproduce simultaneously the observed strengths of those Lick indices that do not depend strongly on element abundance ratios [abridged].
Context. Our Local Group of galaxies appears to be moving relative to the cosmic microwave background with the source of the peculiar motion still uncertain. While in the past this has been studied mostly using galaxies as distance indicators, the weight of type Ia supernovae (SNe Ia) has increased recently with the continuously improving statistics of available low-redshift supernovae. Aims. We measured the bulk flow in the nearby universe ($0.015 < z < 0.1$) using 117 SNe Ia observed by the Nearby Supernova Factory, as well as the Union2 compilation of SN Ia data already in the literature. Methods. The bulk flow velocity was determined from SN data binned in redshift shells by including a coherent motion (dipole) in a cosmological fit. Additionally, a method of spatially smoothing the Hubble residuals was used to verify the results of the dipole fit. To constrain the location and mass of a potential mass concentration (e.g., the Shapley supercluster) responsible for the peculiar motion, we fit a Hubble law modified by adding an additional mass concentration. Results. The analysis shows a bulk flow that is consistent with the direction of the CMB dipole up to $z \sim 0.06$, thereby doubling the volume over which conventional distance measures are sensitive to a bulk flow. We see no significant turnover behind the center of the Shapley supercluster. A simple attractor model in the proximity of the Shapley supercluster is only marginally consistent with our data, suggesting the need for another, more distant source. In the redshift shell $0.06 < z < 0.1$, we constrain the bulk flow velocity to $< 240~\textrm{km s}^{-1}$ (68% confidence level) for the direction of the CMB dipole, in contradiction to recent claims of the existence of a large-amplitude dark flow.
While gas accretion onto some massive black holes (MBHs) at the centers of galaxies actively powers luminous emission, the vast majority of MBHs are considered dormant. Occasionally, a star passing too near a MBH is torn apart by gravitational forces, leading to a bright panchromatic tidal disruption flare (TDF). While the high-energy transient Swift J164449.3+573451 ("Sw 1644+57") initially displayed none of the theoretically anticipated (nor previously observed) TDF characteristics, we show that the observations (Levan et al. 2011) suggest a sudden accretion event onto a central MBH of mass ~10^6-10^7 solar masses. We find evidence for a mildly relativistic outflow, jet collimation, and a spectrum characterized by synchrotron and inverse Compton processes; this leads to a natural analogy of Sw 1644+57 with a smaller-scale blazar. The phenomenologically novel Sw 1644+57 thus connects the study of TDFs and active galaxies, opening a new vista on disk-jet interactions in BHs and magnetic field generation and transport in accretion systems.
We give an overview of the Galaxy Evolution Explorer (GALEX), a NASA Explorer Mission launched on April 28, 2003. GALEX is performing the first space UV sky-survey, including imaging and grism surveys in two bands (1350-1750 Angstroms and 1750-2750 Angstroms). The surveys include an all-sky imaging survey (m[AB] ~ 20.5), a medium imaging survey of 1000 square degrees (m[AB] ~ 23), a deep imaging survey of 100 square degrees (m[AB] ~ 25), and a nearby galaxy survey. Spectroscopic grism surveys (R=100-200) are underway with various depths and sky coverage. Many targets overlap existing or planned surveys. We will use the measured UV properties of local galaxies, along with corollary observations, to calibrate the UV-global star formation rate relationship in local galaxies. We will apply this calibration to distant galaxies discovered in the deep imaging and spectroscopic surveys to map the history of star formation in the universe over the redshift range 0 < z < 1.5, and probe the physical drivers of star formation in galaxies. The GALEX mission includes a Guest Investigator program supporting the wide variety of programs made possible by the first UV sky survey.
The cosmological gamma-ray burst (GRB) phenomenon is reviewed. The broad observational facts and empirical phenomenological relations of the GRB prompt emission and afterglow are outlined. A well-tested, successful fireball shock model is introduced in a pedagogical manner. Several important uncertainties in the current understanding of the phenomenon are reviewed, and prospects of how future experiments and extensive observational and theoretical efforts may address these problems are discussed.
The huge size and uniformity of the Sloan Digital Sky Survey makes possible an exacting test of current models of galaxy formation. We compare the predictions of the GALFORM semi-analytical galaxy formation model for the luminosities, morphologies, colours and scale-lengths of local galaxies. GALFORM models the luminosity and size of the disk and bulge components of a galaxy, and so we can compute quantities which can be compared directly with SDSS observations, such as the Petrosian magnitude and the Sersic index. We test the predictions of two published models set in the cold dark matter cosmology: the Baugh et al. (2005) model, which assumes a top-heavy initial mass function (IMF) in starbursts and superwind feedback, and the Bower et al. (2006) model, which uses AGN feedback and a standard IMF. The Bower et al. model better reproduces the overall shape of the luminosity function, the morphology-luminosity relation and the colour bimodality observed in the SDSS data, but gives a poor match to the size-luminosity relation. The Baugh et al. model successfully predicts the size-luminosity relation for late-type galaxies. Both models fail to reproduce the sizes of bright early-type galaxies. These problems highlight the need to understand better both the role of feedback processes in determining galaxy sizes, in particular the treatment of the angular momentum of gas reheated by supernovae, and the sizes of the stellar spheroids formed by galaxy mergers and disk instabilities.
We devise a method to measure the abundance of satellite halos in gravitational lens galaxies and apply our method to a sample of seven lens systems. After using Monte Carlo simulations to verify the method, we find that substructure comprises $f_{\rm sat} = 0.02$ (median, $0.006 < f_{\rm sat} < 0.07$ at 90% confidence) of the mass of typical lens galaxies, in excellent agreement with predictions of cold dark matter (CDM) simulations. We estimate a characteristic critical radius for the satellites of 0.0001 arcsec $< b <$ 0.006 arcsec (90% confidence). For a $dn/dM \propto M^{-1.8}$ ($M_{\rm low} < M < M_{\rm high}$) satellite mass function, the critical radius provides an estimate for the upper mass limit of $10^6\,M_\odot \lesssim M_{\rm high} \lesssim 10^9\,M_\odot$. Our measurement confirms a generic prediction of CDM models and may obviate the need to invoke alternatives to CDM such as warm dark matter or self-interacting dark matter.
Recent work has emphasized the possibility to probe non-Gaussianity of local type by measuring the power spectrum of highly biased tracers of large-scale structure on very large scales. This method is limited by cosmic variance, by the finite number of structures on the largest scales, and by the partial degeneracy with other cosmological parameters that can mimic the same effect. We propose an alternative method based on the fact that on large scales, halos are linearly biased, but not stochastic, tracers of dark matter: by correlating a highly biased tracer of large-scale structure against an unbiased tracer, one eliminates the cosmic variance error, which can lead to a significant increase in signal-to-noise. For an ideal survey out to $z \approx 2$, the error reduction can be as large as a factor of 7, which should guarantee a detection of non-Gaussianity from an all-sky survey of this type.
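Schematically, the cancellation works because on large scales the two tracers respond to the same underlying mode: if the biased tracer satisfies $\delta_h = b\,\delta_m$ with no stochasticity, the bias can be read off mode by mode without incurring the sample variance of $\delta_m$ itself, and the local-type non-Gaussian signal enters as a scale-dependent shift in $b$. The relations below are written only schematically, with prefactors omitted:

```latex
\hat{b}(\mathbf{k}) = \frac{\delta_h(\mathbf{k})}{\delta_m(\mathbf{k})}
\quad\text{(mode by mode, no sample variance)},
\qquad
b(k) \simeq b_0 + \Delta b\,\frac{f_{\rm NL}}{k^{2}}
\quad\text{(local-type non-Gaussianity).}
```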
Abridged: We estimate the distances to ~48 million stars detected by the Sloan Digital Sky Survey and map their 3D number density distribution over the range 100 pc < D < 20 kpc across 6,500 deg$^2$ of sky. The data show strong evidence for a Galaxy consisting of an oblate halo, a disk component, and a number of localized overdensities, with exponential disk parameters (bias-corrected for an assumed 35% binary fraction) H_1 = 300 pc, L_1 = 2600 pc, H_2 = 900 pc, L_2 = 3600 pc, and a local density normalization of 12%. We find the halo to be oblate, with best-fit axis ratio c/a = 0.64, an r^{-2.8} profile, and a local halo-to-thin disk normalization of 0.5%. We estimate the errors of the derived model parameters to be no larger than ~20% (disk scales) and ~10% (thick disk normalization). While generally consistent with the above model, the density distribution shows a number of statistically significant localized deviations. We detect two overdensities in the thick disk region at (R, Z) ~ (6.5, 1.5) kpc and (R, Z) ~ (9.5, 0.8) kpc, and a remarkable density enhancement in the halo covering >1000 deg$^2$ of sky towards the constellation of Virgo, at distances of ~6-20 kpc. Compared to a region symmetric with respect to the l=0 line, the Virgo overdensity is responsible for a factor of 2 number density excess and may be a nearby tidal stream or a low-surface brightness dwarf galaxy merging with the Milky Way. After removal of the resolved overdensities, the remaining data are consistent with a smooth density distribution; we detect no evidence of further unresolved clumpy substructure at scales ranging from ~50 pc in the disk to ~1-2 kpc in the halo.
We present a method to infer reddenings and distances to stars based only on their broad-band photometry, and show how this method can be used to produce a three-dimensional (3D) dust map of the Galaxy. Our method samples from the full probability density function of distance, reddening, and stellar type for individual stars, as well as the full uncertainty in reddening as a function of distance in the 3D dust map. We incorporate prior knowledge of the distribution of stars in the Galaxy and the detection limits of the survey. For stars in the Pan-STARRS 1 (PS1) 3π survey, we demonstrate that our reddening estimates are unbiased and accurate to ∼0.13 mag in E(B − V) for the typical star. Based on comparisons with mock catalogs, we expect distances for main-sequence stars to be constrained to within ∼20%–60%, although this range can vary, depending on the reddening of the star, the precise stellar type, and its position on the sky. A later paper will present a 3D map of dust over the three quarters of the sky surveyed by PS1. Both the individual stellar inferences and the 3D dust map will enable a wealth of Galactic science in the plane. The method we present is not limited to the passbands of the PS1 survey but may be extended to incorporate photometry from other surveys, such as the Two Micron All Sky Survey, the Sloan Digital Sky Survey (where available), and in the future, LSST and Gaia.